Cognitive Science

Submitted By yangpp
Words 4004
Pages 17
Digital vs. analog:
Neuron: digital: a spike either occurs or not; analog: number of spikes per second

Sparsity: the percentage of neurons that fire in response to a stimulus

How many objects a neuron responds to: sparsity × total objects

ANN: weighted inputs; more input, more output (representation-and-algorithm level)
Output = input × weight
No threshold to fire

Turing test: computational

Visual fields: left visual field: nasal retina of left eye, temporal retina of right eye, processed by right hemisphere
Right visual field: nasal retina of right eye, temporal retina of left eye, processed by left hemisphere

Color blindness: missing cone types; most common: missing L or M cone
Cones do not function at night
Only one class of rods, which support vision at night (hence no color at night)

Opponent processing:
Red/green: (L − M): the difference between those two cone signals; if the L cone is missing, red can't be told from green
Blue/yellow: S − (L + M)/2
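The two chromatic channels can be sketched numerically. This is a minimal illustration, not course code: the function name and the example cone values are made up, and the cone responses are assumed to be given in arbitrary units.

```python
# Opponent channels from cone signals (illustrative sketch).
def opponent_channels(L, M, S):
    red_green = L - M              # positive -> reddish, negative -> greenish
    blue_yellow = S - (L + M) / 2  # positive -> bluish, negative -> yellowish
    return red_green, blue_yellow

# A light that excites L more than M drives the red/green channel positive:
print(opponent_channels(10, 6, 4))  # -> (4, -4.0)
```

Note that a missing L cone forces the red/green channel to a fixed relationship with M alone, which is why red and green become indistinguishable.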

Explicit: conscious

Episodic/semantic

Implicit: skill memory

LTP: stronger synaptic connections
Long term: grow more receptors on the postsynaptic neuron (an anatomical change)
Short term: change in the amount of neurotransmitter released

Turing machine

Single vs double dissociation
Single: one manipulation
Double: two manipulations

Visual angle

Grandmother cell: many cells respond to Halle Berry
They do not respond only to Halle Berry
Math: one cell per concept would require an impossibly large number of neurons
Testing with only 100 images does not necessarily show that those cells respond to only one concept
Size constancy: if there are no depth cues (and thus no size constancy), then the same visual angle gives the same proximal size and the same perceived size. Alternative explanation: the two tasks differ in difficulty

Mediated by separate brain regions

Color constancy

Binding: relating different percepts

What is intelligence?
(Cartesian) Dualism, identity theory, functionalism
The Turing test (and objections to it)
Aunt Bertha machine
Linear vs. exponential scaling

Dualism: mind is nonphysical substance

Identity theory: same mind state means the same brain state
Problems with strict identity theory: 1. thinking of the same thing (same mental state) at different times involves different brain states; 2. two people can think of the same thing with different brains

Functionalism: mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role — that is, their causal relations to other mental states, sensory inputs, and behavioral outputs.

Therefore, it differs from its predecessors, Cartesian dualism (which advocates independent mental and physical substances) and Skinnerian behaviourism and physicalism (which admit only physical substances), because it is concerned only with the effective functions of the brain, through its organization or its 'software programs'.
Functionalism is the doctrine that what makes something a thought, desire, pain (or any other type of mental state) depends not on its internal constitution, but solely on its function, or the role it plays, in the cognitive system of which it is a part. More precisely, functionalist theories take the identity of a mental state to be determined by its causal relations to sensory stimulations, other mental states, and behavior.
For (an avowedly simplistic) example, a functionalist theory might characterize pain as a state that tends to be caused by bodily injury, to produce the belief that something is wrong with the body and the desire to be out of that state, to produce anxiety, and, in the absence of any stronger, conflicting desires, to cause wincing or moaning. According to this theory, all and only creatures with internal states that meet these conditions, or play these roles, are capable of being in pain.
Examples:
a. The Turing Test.

b. "A clock is something that can be used to tell time."

d. "I don't care how you do it, just give me the result!”

The Turing Test: a test for machine’s ability to exhibit intelligent behavior (not necessary, not sufficient)

Objection: semantics, consciousness, intentionality

Aunt Bertha machine
Intelligence is the capacity to emit sensible sequences of responses to stimuli, so long as this is accomplished in a way that averts exponential explosion of search. (Block)

If an agent has the capacity to produce a sensible sequence of verbal responses to an arbitrary sequence of verbal stimuli without requiring exponential storage, then it is intelligent.

Block thought that how a machine passed the Turing test mattered. A machine that simply looked up the answer (or more generally, computed the answer in a manner that required exponentially increasing processing power/storage in the number of questions asked) was not in his view intelligent.

Linear vs. exponential scaling

Computers as models of intelligence
Turing machines
Be able to step through a TM program
Universal computation and why it matters
Limits to logic (incomplete, undecidable)
Computers versus brains

Turing machines
A Turing machine is a theoretical device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.

Marr levels:

1. computational theory:
What is the goal of the computation, why is it appropriate, and what is the strategy by which it can be carried out? (input, output, goal)

Behavioral trichromacy; recognize ripe fruit by color (the goal)

2. representation and algorithm:
How can this computation be implemented?
What is the representation for the input and output?
What is the algorithm for the transformation?

Estimate probabilities of events using Bayes rule
Describe the world state using a list of symbols

3. hardware implementation: how can the representation and algorithm be realized physically?

Use a tape and a “read head” to read those symbols

Be able to step through a TM program

Old state, old symbol, new state, new symbol, direction to move the head
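Stepping through a TM program with that five-part rule format can be sketched as follows. This is a minimal illustration: the helper name `run_tm`, the blank symbol `"_"`, and the example bit-flipping machine are all assumptions, not from the course.

```python
# A minimal Turing machine stepper. The transition table maps
# (old_state, old_symbol) -> (new_state, new_symbol, head_move).
def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))      # sparse tape; missing cells are blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")  # "_" is the blank symbol
        new_state, new_symbol, move = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
        state = new_state
    return "".join(tape[i] for i in sorted(tape))

# Example machine: invert every bit, then halt at the first blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_tm(rules, "1011"))  # -> 0100_
```

To step through a program by hand, apply the same loop: look up (current state, symbol under the head), write, move, and switch state.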

Universal computation and why it matters

One form of (abstract) computation that can emulate any other computing method
The term "universal computation" refers to any method or process capable of computing anything that is able to be computed.
Modern computers are Turing machines
Can a single simple mechanism do all thinking? Do brains use a single thinking mechanism?
Is logic the right language to model thought?

Turing machines are important because they represent what computers can and cannot do. Any deterministic algorithm can be mapped to a Turing machine algorithm. A system described as "Turing complete" can compute anything a Turing machine can.

Limits to logic (incomplete, undecidable)

Complete: every true statement can be proved
Consistent: can't prove both P and ~P

Decidable: there is a procedure for telling whether a statement is provable

Gödel: mathematical logic is incomplete:
Not all statements can be proved true or false
The consistency of a logic system can’t be proved within itself

Turing:
Mathematical logic is undecidable:
There is no procedure for determining whether a proposition is provable
Used his own machine to prove that

Computers versus brains

Similarity:
Brains and computers are both information-processing devices: both receive input; store and retrieve data; compute; output results
Computation is similar (or equal) to thought
Hardware doesn't matter: same computation
Logic is the language of thought
Alternative: the "Swiss army knife" approach: many separate modules for different types of reasoning
Hardware matters: how the hardware is wired determines how it works
Difference:
Digital vs. Analog
Digital computers and analog brains

Deterministic TMs and random brains
Program a pseudo-random number generator on a TM that is indistinguishable from true randomness

Computers:
Fast circuits
Small number of interconnected processors
Digital flow of information
Determinate, exactly repeatable behavior
Designed and replicated exactly
Small flaw or damage is catastrophic
Mostly programmed
Good at arithmetic and logic; bad at pattern recognition

Brains:
Slow circuits
Large number of interconnected processors
Quasi analog flow of information
Stochastic (random) behavior
Grow organically with lots of case-to-case variation
Small flaw or damage usually irrelevant
Some programming (by evolution), some learning (from individual experiences)
Bad at arithmetic and logic; good at pattern recognition

Perception as reconstruction
Eye as an optical instrument
What we perceive doesn’t always match reality
How do we know this (illusions)
Distal and proximal stimulus (e.g., distal and proximal size)
Computing visual angle from size & distance, or size from visual angle and distance.
Percepts are constructed to resolve ambiguity in sensory input
Can use this idea to understand illusions
Distance is taken into account when perceiving size
Experiment of Holway and Boring
For any system with ambiguous sensory input, illusions are unavoidable
Marr levels

Eye as an optical instrument
Cornea: refracts light
Iris: controls the pupil
Pupil
Lens: focuses light onto the retina
Retina
Fovea: central vision

Receptive field: The area (expressed either in the world or on the retina) where light can affect a neuron’s firing rate.

Blind spot
The optic disc or optic nerve head is the location where ganglion cell axons exit the eye to form the optic nerve. There are no light sensitive rods or cones to respond to a light stimulus at this point. This causes a break in the visual field called "the blind spot" or the "physiological blind spot".

Photoreceptor cells: phototransduction; convert light into signals that can stimulate biological processes

Rods and Cones
Rods: extremely sensitive; used for night vision, hence no color at night
Cones: brighter light, 3 different types of cells, responsible for color vision

What we perceive doesn’t always match reality
How do we know this (illusions)

Our perception of the world is not simply the image projected on the retina. Rather, what we see is a construction of the mind.
Vision as reconstruction

Why:
Logical difficulty: if what we perceive is simply the retinal image, who or what is looking at that image?

Practical difficulty: the retinal image provides ambiguous information about the physical arrangement of the world around us

Distal and proximal stimulus (e.g., distal and proximal size)
Compute visual angle
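The visual-angle arithmetic can be sketched as below, assuming distal size and distance are given in the same units. The function names are illustrative, not from the course.

```python
import math

# Visual angle (in degrees) from distal size and distance, and the inverse:
# distal size from visual angle and distance.
def visual_angle(size, distance):
    return math.degrees(2 * math.atan(size / (2 * distance)))

def size_from_angle(angle_deg, distance):
    return 2 * distance * math.tan(math.radians(angle_deg) / 2)

# Same size/distance ratio -> same visual angle (same proximal size):
print(round(visual_angle(1.0, 2.0), 2))  # -> 28.07
print(round(visual_angle(2.0, 4.0), 2))  # -> 28.07
```

This is the ambiguity behind the Holway and Boring setup: a small near object and a large far object can produce identical proximal sizes, so depth information must be used to recover distal size.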

Percepts are constructed to resolve ambiguity in sensory input

For any system with ambiguous sensory input, illusions are unavoidable

Color Vision
What is color good for?
Color matching experiment, behavioral trichromacy
Three classes of cones, biological trichromacy
Know how to compute isomerization rates of cones
The link between behavior and biology for color matching
Color blindness

What is color good for?
Possible functional advantages include:

i) Object identification: telling two objects of similar shape and size apart, e.g., apple vs. orange, finding my car in a parking lot of similar cars.
ii) Scene segmentation: grouping together different image regions on the basis of color to find an object, e.g., oranges amongst leaves, the letter example from class.
iii) Signaling: e.g., traffic lights, emotions based on facial color, poison frogs

Some examples tend to cross categories, and grading was fairly lenient unless the 'two' examples really seemed like the same function with a different name.

Opponent process theory:
Neurons integrate the excitatory and the inhibitory signals from receptors

Responses to one color of an opponent channel are antagonistic to responses to the other, so the two are never perceived together
Phenomenological observation: afterimage
Physiological evidence: opponent neurons
Red/Green Blue/Yellow Black/White

Behavioral trichromacy: essentially any light may be matched by a mixture of three primaries: red, green, and blue

Biological trichromacy: S, M, L cones

Conditioning, Reinforcement Learning
Classical (Pavlovian) vs. operant conditioning
CS, CR, US, UR
Different forms of reward/punishment
Learning, extinction, and blocking in classical conditioning
Rescorla-Wagner model
Know how to update association strength with RW model
Understand how R/W accounts for blocking
Reinforcement learning for machines
Mouse in a maze example
Know how to do updates for RL
Intuition, learning occurs when you are surprised by an event
Double dissociation logic
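The Rescorla-Wagner update and the blocking effect from the outline above can be sketched as follows. The cue names, trial counts, and the combined learning rate of 0.3 are illustrative assumptions, not values from the course.

```python
# Rescorla-Wagner: dV = alpha * beta * (lambda - V_total).
# Learning is driven by surprise: the gap between the reward received
# (lam) and the reward predicted by all cues present.
def rw_update(V, cues, lam, alpha_beta=0.3):
    total = sum(V[c] for c in cues)         # combined prediction of present cues
    for c in cues:
        V[c] += alpha_beta * (lam - total)  # all present cues share the error
    return V

V = {"light": 0.0, "tone": 0.0}
for _ in range(30):                  # phase 1: train light alone to asymptote
    rw_update(V, ["light"], lam=1.0)
for _ in range(30):                  # phase 2: light + tone compound
    rw_update(V, ["light", "tone"], lam=1.0)

# Blocking: the light already predicts the reward, so there is no surprise
# left and the tone gains almost no associative strength.
print(V["light"] > 0.95, V["tone"] < 0.01)  # -> True True
```

The intuition in the outline ("learning occurs when you are surprised") is exactly the `(lam - total)` term: once the prediction matches the outcome, the update is zero.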

Neural basis of learning Neurons
Synapse, dendrite, axon
Action potential (aka “spike”), firing rates, neurotransmitters
Inhibition, excitation
Long-term potentiation (LTP), habituation
Modularity
Brain is divided into specialized regions
Why, computationally, one needs modularity
Semantic, episodic, and skill learning, implicit vs explicit memory working memory
Role of Hippocampus in memory formation

Neurons
All or none principle
If a neuron responds at all, then it must respond completely. Greater intensity of stimulation does not produce a stronger signal but can produce more impulses per second.
Soma, nucleus
Dendrite Axon
Electrical impulses, action potential, spike
Synapse: receptors, neurotransmitters

Firing rate: e.g., one spike every 10 milliseconds = 100 spikes per second

Receptive field: the area where light affects a neuron's firing rate
Inhibition
Excitation

Rate of firing depends on the strength of the stimulus
Both excitatory and inhibitory inputs
Synaptic weights change with experiences: strengthen frequently used synapses: learning

Up to 80% of the neurons within the forming nervous system die:
1. this ensures that adequate numbers of neurons establish appropriate connections
2. new neural connections can be established

Habituation
A form of non-associative learning: the psychological process in humans and other organisms in which the psychological and behavioral response to a stimulus decreases after repeated exposure to that stimulus over a duration of time.

Decrease in strength of a behavioral reaction to a repeatedly presented stimulus: fewer ions enter the sensory system; fewer packets of neurotransmitter are released; decreased firing rates

Habituation need not be conscious. In this way, habituation is used to ignore any continual stimulus, presumably because changes in stimulus level are normally far more important than absolute levels of stimulation. This sort of habituation can occur through neural adaptation in the sensory nerves themselves and through negative feedback from the brain to peripheral sensory organs.

Long-term potentiation (LTP)
Increase in strength of connections between neurons
Neurons that fire together, wire together. Long-term potentiation (LTP) is a long-lasting enhancement in signal transmission between two neurons that results from stimulating them synchronously.

New receptors form; new types of neurotransmitters come into use
Anatomical change

Modularity
Brain is divided into specialized regions

Memory:
Prefrontal cortex: working memory
Hippocampus: long term memory
Cerebellum / motor cortex: skill memory

Modules: functional units that are also physical chunks of brain
Other examples: the visual system (color, motion, form); attention; language (syntax and semantics)

Adjacent sensory inputs (body part) tend to map to adjacent parts of the brain. Visual cortex: located in occipital cortex

Information from the left and right visual fields is initially processed separately by the right and left hemisphere

Connected to:
1. The dorsal pathway, to the parietal cortex: associated with motion and representation of object location (the "where"/"how" pathway)

2. The ventral pathway, to the temporal cortex: associated with recognition and object representation, and storage of long-term memory (the "what" pathway)

Why, computationally, one needs modularity
Scaling problem: cannot connect every neuron to every other one
10 billion neurons, each randomly connected to k others

k too small: nothing is connected to both "red" and "bicycle"; makes the binding problem difficult to solve (via synchronous firing of neurons)

k too large: everything connected to everything (not feasible in a physical system)

just right: k ≈ √n ≈ 100,000; the actual human brain: about 40,000 connections per neuron

Physics requires the brain to be modular
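The "just right" value in the scaling argument is simply the square root of the neuron count. A quick check, using the notes' figure of 10 billion neurons:

```python
import math

# k ~ sqrt(n) connections per neuron, for n = 10 billion neurons.
n = 10**10
k = round(math.sqrt(n))
print(k)  # -> 100000
```

The notes' point is that the brain's actual connectivity (~40,000 per neuron) is on this order, far below full interconnection (n - 1 ≈ 10 billion), which a physical brain could not accommodate.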

Functional MRI
Functional magnetic resonance imaging or functional MRI (fMRI) is a type of specialized MRI scan used to measure the hemodynamic response (change in blood flow) related to neural activity in the brain or spinal cord of humans or other animals. It is one of the most recently developed forms of neuroimaging. Since the early 1990s, fMRI has come to dominate the brain mapping field due to its relatively low invasiveness, absence of radiation exposure, and relatively wide availability.

Semantic, episodic, and skill learning, implicit vs explicit memory
Skill memory: motor cortex, cerebellum

Semantic memory: Semantic memory refers to the memory of meanings, understandings, and other concept-based knowledge unrelated to specific experiences. The conscious recollection of factual information and general knowledge about the world is generally thought to be independent of context and personal relevance.

Episodic memory is the memory of autobiographical events (times, places, associated emotions, and other contextual knowledge) that can be explicitly stated. Semantic and episodic memory together make up the category of declarative memory, which is one of the two major divisions in memory. The counterpart to declarative, or explicit, memory is procedural memory, or implicit memory.

Working memory
Short-term memory
Prefrontal cortex

Working memory is the ability to actively hold information in the mind needed to do complex tasks such as reasoning, comprehension and learning. Working memory tasks are those that require the goal-oriented active monitoring or manipulation of information or behaviors in the face of interfering processes and distractions. The cognitive processes involved include the executive and attention control of short-term memory which provide for the interim integration, processing, disposal, and retrieval of information.

Implicit memory is a type of memory in which previous experiences aid in the performance of a task without conscious awareness of these previous experiences.
In daily life, people rely on implicit memory every day in the form of procedural memory, the type of memory that allows people to remember how to tie their shoes or ride a bicycle without consciously thinking about these activities.

Explicit memory is the conscious, intentional recollection of previous experiences and information. People use explicit memory throughout the day, such as remembering the time of an appointment or recollecting an event from years ago.
Explicit memory involves conscious recollection, compared with implicit memory which is an unconscious, nonintentional form of memory. Remembering a specific driving lesson is an example of explicit memory, while improved driving skill as a result of the lesson is an example of implicit memory.

Role of Hippocampus in memory formation
Long-term memory
The hippocampus is a major component of the brains of humans and other mammals. It belongs to the limbic system and plays important roles in the consolidation of information from short-term memory to long-term memory and spatial navigation.

Connectionism and modularity
Local versus distributed representations
Grandmother neuron concept (localist representation)
Why can’t it be right?
Alternatives to it?
Sparsity, numbers of neurons responding calculation
Binding problem
Artificial neural networks (ANNs)
Similarities and differences between real and artificial neurons
Compute simple network response
Brains have a modular design
Organization of sensory/motor areas

Connectionism is a set of approaches in the fields of artificial intelligence, cognitive psychology, cognitive science, neuroscience and philosophy of mind, that models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units.

Local versus distributed representations
Grandmother neuron concept (localist representation)

Thus your mental image of your grandmother may reside not in a few specialized neurons, but rather in a large population of neurons distributed across various cortical areas. The functional co-operation of neurons that discharge synchronously might thus be the "glue" that takes all the distinct elements analyzed separately by the brain and binds them into a coherent whole.

the key to the problem of how the various parallel processing modules link up with one another, known as the "binding problem," may lie in the synchronous discharge of populations of neurons in very different areas of the brain.

The grandmother cell hypothesis, more technically known as sparseness, is not universally accepted. The opposite of the grandmother cell theory is the distributed representation theory, which states that a specific stimulus is coded by its unique pattern of activity over a group of neurons.
The arguments against sparseness include:
1. According to some theories, one would need thousands of cells for each face, as any given face must be recognised from many different angles (profile, 3/4 view, full frontal, from above, etc.).
2. Rather than becoming more and more specific as visual processing proceeds from the retina through the different visual centres of the brain, the image is partially dissected into basic features such as vertical lines, colour, speed, etc., distributed in various modules separated by relatively large distances. How all these disparate features are re-integrated to form a seamless whole is known as the binding problem.

Grandmother cells
Why can’t it be right?
Problems with this idea: (1) there are just too many different stimuli in the environment to assign specific neurons to each one (impossibly many specialized neurons); (2) although there are neurons that respond only to specific types of stimuli, like faces, even those neurons respond to a number of different faces

Alternatives to it?
Distributed representation:
Fundamentally, a distributed representation is one in which meaning is not captured by a single symbolic unit, but rather arises from the interaction of a set of units, normally in a network of some sort. In the case of the brain, the concept of ‘grandmother’ does not seem to be represented by a single ‘grandmother cell,’ but is rather distributed across a network of neurons

A distributed representation states that a specific stimulus is coded by its unique pattern of activity over a group of neurons.
In a distributed representation, there is no neuron or small set of neurons that represents any single concept

Advantages: 1. allows precise predictions with imprecise neurons; 2. robust to failure of neurons; 3. allows sharing between related concepts (and neighboring fingers or toes)

Sparsity, numbers of neurons responding: calculation
* On the order of 10^9 neurons in the human medial temporal lobes
* 0.5% sparseness implies 0.005 × 10^9 = 5 million neurons are activated by a typical stimulus
* A typical adult recognizes 10,000–30,000 discrete objects
* This implies that each neuron fires in response to 50–150 distinct representations: 10^4 × 5×10^6 / 10^9 = 50
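The same arithmetic can be checked directly; the numbers below are the course's order-of-magnitude figures, and the variable names are illustrative:

```python
# Sparsity arithmetic from the notes.
n_neurons = 10**9    # neurons in the medial temporal lobes
sparseness = 0.005   # fraction of neurons active per stimulus
n_objects = 10**4    # discrete objects a typical adult recognizes

active_per_stimulus = sparseness * n_neurons  # neurons firing for one object
objects_per_neuron = sparseness * n_objects   # objects one neuron fires for
print(int(active_per_stimulus), int(objects_per_neuron))  # -> 5000000 50
```

Note the shortcut: objects per neuron is just sparseness × total objects, which equals n_objects × active_per_stimulus / n_neurons.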

Binding problem

the problem of how to connect the different attributes corresponding to a single object/concept in the brain, for example, how different neurons/groups of neurons representing “racing” “bicycle”, “Violet” and “Lydia” are connected to form the concept “Violet, Lydia's racing bicycle”

How to relate different percepts, possibly at different times?
* See, hear, and smell a tomato
* Touch a stove, then feel pain

May be achieved by synchronized neuron firing

Artificial neural networks (ANNs)
1. Highly simplified models of neurons
2. Often combined in layers
3. Receive inputs and calculate features of them in parallel
4. Learn by adjusting weights
5. Use distributed representations

But do not capture the complexity of real neurons or brain structures

Similarities and differences between real and artificial neurons
There were quite a few different answers that were acceptable. Here are four:

1. Real neurons transmit information using action potentials while ANN "neurons" simply transmit a number.

2. Real neurons transmit signals one to another using neurotransmitters, while ANN "neurons" simply connect their output directly to the input of the next "neuron" in the processing chain.

3. Real neurons can (and do) die, while ANN "neurons" do not.

4. Real neurons have a maximum response rate, while linear ANN "neurons" do not. (Here it is key to say linear, as some forms of ANNs do incorporate a maximum response.)

Statements about ANNs as a whole (for example, "ANNs are not robust to the failure of a single one of their 'neurons', while brains show such robustness") did not receive credit, as the question was about individual neurons.

Compute simple network response
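Computing a simple network response is just the input × weight sum, with no firing threshold (per the notes on linear ANN units). The function names, inputs, and weights below are made-up numbers for illustration.

```python
# A single linear ANN "neuron": output = sum of input * weight.
def neuron_response(inputs, weights):
    return sum(i * w for i, w in zip(inputs, weights))

# A layer: one output per row of the weight matrix, computed in parallel.
def layer_response(inputs, weight_matrix):
    return [neuron_response(inputs, w) for w in weight_matrix]

print(neuron_response([1.0, 0.5], [2.0, -1.0]))               # -> 1.5
print(layer_response([1.0, 0.5], [[2.0, -1.0], [0.0, 4.0]]))  # -> [1.5, 2.0]
```

Negative weights play the role of inhibition and positive weights the role of excitation, which is the part of real neuron behavior these units do capture.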

Brains have a modular design
Modularity is a general systems concept, typically defined as a continuum describing the degree to which a system’s components may be separated and recombined.[1] It refers to both the tightness of coupling between components, and the degree to which the “rules” of the system architecture enable (or prohibit) the mixing and matching of components.
1. Hundreds of functional areas and subareas
2. Physics requires the brain to be modular: there is not enough room for all neurons to be interconnected
3. Uses distributed data representations; adjacent sensory inputs tend to map to adjacent brain regions (visual cortex, somatosensory mappings)

Double dissociation

Single dissociation: dissecting complex mental tasks into subcomponents;
This is done by demonstrating that a lesion to brain structure A disrupts function X but not function Y.

Double dissociation:
This is the demonstration that two experimental manipulations each have different effects on two dependent variables; if one manipulation affects the first variable and not the second, the other manipulation affects the second variable and not the first. If one can demonstrate that a lesion in brain structure A impairs function X but not Y, and further demonstrate that a lesion to brain structure B impairs function Y but spares function X, one can make more specific inferences about brain function and function localization.

Similar Documents

Free Essay

Cognitive Science Methods

...Kimberly Ann McBean PSY 314 Professor Sailor Spring 2015 Methods Participants XXX participants participated in a memory experiment to receive classroom credit. These participants were recruited from 3 upper level psychology classes, XXX from cognitive psychology class and XXX from 2 experimental psychology classes at a public university, 66% of the samples were majority female. Materials In this experiment a 2 (number of presentations= once or twice words shown) x 2 (word frequency= low or high) design was used. The participants were presented with a list of 36 words on an overhead projector (see Appendix for a complete list of words used in this experiment) all the words were white, same size letters, same font as well. All words presented had a black background; the concreteness and word length was the same. The list consists of 18 high frequency words and 18 low frequency words. Participants were also given a sheet of paper numbered 1-36. Participants were instructed to study and to write down an estimate (on the numbered paper given) of the chance of how well they would do at recalling each word on a later test (i.e. if you thought you had 80% chance of recalling a word, you would write down 80). After participants were presented with the words once, they were instructed that there would be an additional chance for them to study some of the words (without making recall estimates). While participants were instructed to write down estimate times, they were also......

Words: 784 - Pages: 4

Free Essay

Space (and Time) for Culture

...University, Germany Lisa Hüther (lisa.huether@psychologie.uni-freiburg.de) Department of Psychology, Freiburg University, Germany Space is a fundamental domain for cognition, and research on spatial perception, orientation, referencing, and reasoning addresses core questions in most of the disciplines that make up the cognitive sciences. Consequently, space represents one of those domains for which various disciplinary interests overlap to a substantial extent. For instance, the question of whether and how spatial cognition and language interact has been one of the core questions since early on (e.g., Clark, 1973; Miller & Johnson-Laird, 1976), and yet, consensus between psychologists and linguists is difficult to achieve (e.g., Li & Gleitman, 2002, vs. Levinson et al., 2002). Perhaps most controversial in this dispute is the extent to which spatial cognition is culturally variable (for linguistic variability, see also Evans & Levinson, 2009, and comments there-in). Expanding the space of cognitive science research to ‘nonstandard’ cultures (Henrich et al., 2010; Medin et al., 2010) is thus crucial for the advancement of cognitive science. For this very reason, cross-disciplinary...

Words: 1607 - Pages: 7

Premium Essay

Concepts and Prototypes

...depending on how well it is perceived in our minds. Perhaps most concepts are components of theories or explanations. Unquestionably, changes of theory change concepts, and new concepts, or revisions of old ones, can alter theories. In psychology, concepts of mind must be developed or discovered, much as in physics, for we cannot see at all clearly into our own minds by introspection. So we need experiments in psychology; they sometimes suggest concepts far removed from common sense, or what we seem to be like. A concept is typically associated with a corresponding representation in a language or symbology; however, some concepts do not have a linguistic representation. The meaning of "concept" is explored in mainstream information science, cognitive science, metaphysics, and philosophy of the mind. A prototype on the other hand, is an early sample or model built to test a concept or process or to act as a thing to be replicated or learned from. Prototype is a term used in a variety of contexts, including semantics, design, electronics, and software programming. A prototype is created to test and experiment a new design to improve precision by system analysts and users. Prototyping helps to offer specifications for a real, working system rather than a theoretical one. They are handmade early versions of...

Words: 667 - Pages: 3

Free Essay

Computational Linguistics

...Computational Linguistics   Computational linguistics (CL) is a discipline between linguistics and computer science which is concerned with the computational aspects of the human language faculty. It belongs to the cognitive sciences and overlaps with the field of artificial intelligence (AI), a branch of computer science aiming at computational models of human cognition. Computational linguistics has applied and theoretical components. Theoretical CL takes up issues in theoretical linguistics and cognitive science. It deals with formal theories about the linguistic knowledge that a human needs for generating and understanding language. Today these theories have reached a degree of complexity that can only be managed by employing computers. Computational linguists develop formal models simulating aspects of the human language faculty and implement them as computer programmes. These programmes constitute the basis for the evaluation and further development of the theories. In addition to linguistic theories, findings from cognitive psychology play a major role in simulating linguistic competence. Within psychology, it is mainly the area of psycholinguistics that examines the cognitive processes constituting human language use. The relevance of computational modelling for psycholinguistic research is reflected in the emergence of a new subdiscipline: computational psycholinguistics. Applied CL focusses on the practical outcome of modelling human language use. The......

Words: 880 - Pages: 4

Premium Essay

Language Essay

...Language Essay Ryan Butler Psychology 360 August 29, 2011 Professor Newlin LANGUAGE Have you ever wondered how we speak? How about why our communication is considered a language while other animals' communication is not? A wide range of beliefs exists on what defines language. Thus, by exploring the definition of language and lexicon, evaluating language's key features, the four levels of language structure and processing, and the role of language in Cognitive Psychology, an understanding of what language is becomes clear. Let us begin by defining language and a term called the lexicon. LANGUAGE AND LEXICON DEFINITION One big question, when the subject of language comes up, is exactly what language is. What constitutes something as a language? By explaining one definition of a language, and a term associated with language, called a lexicon, a definition of language transpires. The Willingham (2007) text mentions four characteristics communication must possess to officially be considered a language.  One of these characteristics is that language must be communicative, serving as communication between individuals in some form or another.  Secondly, the symbols standing for words must be arbitrary, bearing no inherent relation to the words they represent.  Third, a language must be structured, not arbitrary.  For example, if I say a dog was walking on a sidewalk, I cannot say a sidewalk was walking on a dog.  Fourth, a language must......

Words: 1420 - Pages: 6

Premium Essay

Hahaa

...the most heated arguments against science, particularly psychology and biology, seem to center on a perceived threat against humanity. For example, evolution was and still is challenged, in large part, because to believe in evolution means accepting natural selection and similarity among evolved species. Evolution threatens the uniqueness and even the superiority of humankind, according to many opponents. Similarly, the possibility of language in primates is refuted by many, I believe, in large part because this cognitive ability has been believed to be reserved for humans alone. John Searle seems to be making a similar argument against what he refers to as "strong" artificial intelligence. Searle argues that "instantiating a program" (422) cannot lead to understanding as a human, or even an animal, understands. Searle argues that machines or programs lack "intentionality" and are therefore meaningless. I sympathize with Searle in that it is difficult to accept a machine that shares cognitive capabilities with a human. Such a hypothesis seems to challenge the core values of humanity, such as our individuality and our unpredictability, or diversity. The rational human mind is something that has set humans apart from all other things since before Descartes' "cogito ergo sum." However, just as evolution and language capabilities among primates have challenged core beliefs about humanity, more recent discoveries made in cognitive neuroscience and cognitive...

Words: 578 - Pages: 3

Premium Essay

Artificial Intelligence

...A uniform definition of artificial intelligence, or AI, is that the computer simply imitates behavior of people that would be regarded as intelligent if a person performed it. But even within this definition, various problems and views persist among scholars and critics over how to interpret the results of AI programs. The most common and natural approach to AI research is to ask of each program, what can it do? What are the actual results compared to human intelligence? For example, what counts about a chess program is how good it is. Can it possibly beat a chess grandmaster? There is also a more structured approach to evaluating artificial intelligence, which began to open the door to artificial intelligence's contribution to the world of science. According to this theoretical approach, what matters is not only the input-output relations of the computer, but also what the program can tell us about genuine human cognition (Ptack, 1994). From this point of view, artificial intelligence offers not only a business or commercial advantage, but also an understanding, a nice extra for anyone who knows how to use a calculator. A calculator can overcome any mathematician in multiplication and division, so it qualifies as an intelligent being under the definition of artificial intelligence. This does not address the psychological aspect of artificial intelligence, as these...

Words: 999 - Pages: 4

Premium Essay

Artificial Intelligence

...of artificial intelligence, or AI, is that computers simply mimic behaviors of humans that would be regarded as intelligent if a human being did them. However, within this definition, several issues and views still conflict because of ways of interpreting the results of AI programs by scientists and critics. The most common and natural approach to AI research is to ask of any program, what can it do? What are the actual results in comparison to human intelligence? For example, what matters about a chess-playing program is how good it is. Can it possibly beat chess grand masters? There is also a more structured approach in assessing artificial intelligence, which began opening the door of the artificial intelligence contribution into the science world. According to this theoretical approach, what matters is not only the input-output relations of the computer, but also what the program can tell us about actual human cognition (Ptack, 1994). From this point of view, artificial intelligence not only gives the commercial or business world an advantage, but also extends an understanding and an enjoyable benefit to everyone who knows how to use a pocket calculator. A calculator can outperform any living mathematician at multiplication and division, so it qualifies as intelligent under the definition of artificial intelligence. This fact does not address the psychological aspect...

Words: 1106 - Pages: 5

Premium Essay

Cognitive Psychology

...Cognitive psychology is the study of the mind. To be more specific, it is the study of how one thinks, remembers, learns, and perceives; the mental processes. It shows us how a group of people can view the same object and yet form different conclusions about what the object is. Cognitive psychology is one of the newer fields of psychology; it is only 50 years old (Willingham, 2007). It was finalized as its own branch in response to the lack of information provided by previous branches of psychology. No other branch truly dealt with how and why a person thought or was able to learn and remember, two key components of the workings of the human mind. Granted, these two key components helped open the door for cognitive psychology, but several key milestones helped get cognitive psychology's feet through the door. These key milestones include the missteps of behaviorism, information processing and the computer metaphor, artificial intelligence, and neuroscience. The Missteps of Behaviorism Behaviorism came into the world of psychology and appeared to be the solution for it all. The key was to study the actions of a person; the mind was of no consequence. For quite a few years, there were not any doubts about behaviorism. Behaviorism had a good run, but it could not answer questions about the human mind. After all, to behaviorists the mind was not important. Behaviorists believed that everything they learned from experiments on animals applied to humans. Questions were now being asked about how......

Words: 985 - Pages: 4

Premium Essay

Submission

...Cognitive psychology is defined as the study of mental processes. Mental processes can be classified as problem solving, thinking, remembering, speaking, perceiving, learning, and even reasoning. Cognitive psychology is mainly based on studying how a person obtains and stores information from the world that they live in. It also studies the way that people use this information beneficially, and how they understand things. Cognitive psychology arose as a response to other approaches that had proved to have flaws. There was also a link between studies of the mind that eventually led to the study of behavior. Because behaviorism had these flaws, cognitive psychology developed out of disagreements with behaviorism. There are many different key milestones in the development of cognitive psychology. Neuroscience, information processing, criticisms of behaviorism, and connectionism are four of these milestones. All four are associated with different aspects of the mind. The milestone of neuroscience examines how the nervous system and the brain work together to determine behavior. People who specialize in this field are referred to as neuroscientists. They are able to account for various behaviors, such as intelligent behavior, through the tactics of hypothetical representations, abstract constructs, and processes. Neuroscientists are able to use techniques of localization in identifying brain......

Words: 713 - Pages: 3

Premium Essay

Cognitive Psy

...Cognitive psychology was introduced when flaws were found in the areas of behaviorism (Galotti, 2014). The field of behaviorism had moved its concerns toward observable behaviors instead of focusing on the mind (Galotti, 2014). As this was occurring, cognitive psychology was born. This branch of psychology emphasizes how the mind thinks and functions (Galotti, 2014). For instance, cognitive psychology encompasses the areas of learning, memory, attention, perception, reasoning, language, conceptual development, and decision making (Galotti, 2014). It is defined as the scientific study of mental processing (Galotti, 2014). Cognitive psychology concentrates on how an individual stores, processes, acquires, and interprets the world around them. It also tries to classify certain behaviors that are presented through different characteristics (Willingham, 2007). Once this area of psychology was introduced, it brought back the importance of studying the mind. The following sections cover the key milestones in the development of cognitive psychology and the importance of behavioral observation in this field. Key milestones in the development of cognitive psychology There were four key milestones that had a hand in developing cognitive psychology: neuroscience, the information processing model, artificial intelligence, and the criticism of behaviorism (Carley, 2012). The criticism that behaviorism received......

Words: 1037 - Pages: 5

Premium Essay

Artificial Intelligence

...of artificial intelligence, or AI, is that computers simply mimic behaviors of humans that would be regarded as intelligent if a human being did them. However, within this definition, several issues and views still conflict because of ways of interpreting the results of AI programs by scientists and critics. The most common and natural approach to AI research is to ask of any program, what can it do? What are the actual results in comparison to human intelligence? For example, what matters about a chess-playing program is how good it is. Can it possibly beat chess grand masters? There is also a more structured approach in assessing artificial intelligence, which began opening the door of the artificial intelligence contribution into the science world. According to this theoretical approach, what matters is not only the input-output relations of the computer, but also what the program can tell us about actual human cognition (Ptack, 1994). From this point of view, artificial intelligence...

Words: 910 - Pages: 4

Premium Essay

Cognitive Psychology

...Cognitive Psychology Definition Cognitive Psychology Definition (Scholarpedia, 2007) states “Cognitive psychology is the scientific investigation of human cognition, that is, all our mental abilities – perceiving, learning, remembering, thinking, reasoning, and understanding. It is closely related to the highly interdisciplinary cognitive science and influenced by artificial intelligence, computer science, philosophy, anthropology, linguistics, biology, physics, and neuroscience” (Dosher, Lin-Lu, 2007, p. 2769). Cognitive psychology uses experiments and the scientific method to establish how humans transform sensory input into thoughts, which in turn become the individual’s actions, through the intricate processes of cognition (Willingham, 2007). In the early 20th century, cognitive psychology declined because of the rise of behaviorism. In the mid-1950s the cognitive revolution developed because behaviorist ideas lacked an understanding of the relationship “between memory and performance, and complex learning” (Dosher, Lin-Lu, 2007, p. 2769). Cognitive psychology began to come into play with the support of new technology, abstract concepts, and neuroscience (Willingham, 2007).     Milestones in the Development of Cognitive Psychology As mentioned earlier, behaviorism began to accumulate problems around the mid-1950s. One of the considerable problems......

Words: 952 - Pages: 4

Premium Essay

Library Project

...Tracey Tokuhama-Espinosa, Ph.D. Journal Name: Johns Hopkins School of Learning http://education.jhu.edu/ A BRIEF HISTORY OF THE SCIENCE OF LEARNING: Part 2 (1970s-present) Year/Issue#/Page#s: Winter 2011 Journal/Vol. IX No. 2/ http://education.jhu.edu/PD/newhorizons/Journals/Winter2011/Tokuhama5 Thesis/Claim (in my words): Essentially, I believe the author is showing us a history, or evolution, of teaching and learning, all of which has brought us to this second part of the article, covering where she believes we are now: the era of MBE teaching and learning. This is further indicated in the references to the other articles in the book Mind, Brain, and Education Science: A comprehensive guide to the new brain-based teaching (W.W. Norton). Part Two: Review of the Article This essay is based more on the factual history of teaching and learning, and veers into the topic, or argument, of the MBE style of teaching and learning. In this second part, we see in the mid-to-late 1980s the birth of neuroscience. By the late 1980s, scientists, psychologists, and the like started many foundations to study these new concepts of the mind further. Then, as the 1990s brought forth the “decade of the brain,” two basic types of learning theories were strengthened: modular, domain-specific theories versus global theories. Newly termed cognitive neuroscientists began doing experiments based on global theories of how the brain worked in terms of teaching and......

Words: 958 - Pages: 4

Premium Essay

Theoretical Models of Decision Making

...Theoretical models of decision-making, and their neuroscientific underpinnings Introduction In this essay I would like to focus on the theoretical models of decision making that have come from psychology, cognitive and ecological alike, and review relevant literature from cognitive neuroscience that may or may not provide a neural foundation for the claims they have formulated. The reason I find it interesting to contrast these two approaches is their different outlook on the concept of “bias”. Traditional (closed-systems) approaches to decision-making The investigation of decision-making is a multidisciplinary endeavor, with researchers approaching the area from different fields and applying numerous different models (Hastie, 2001). The normative model of decision-making originates from mathematics and economics, and the most prominent normative model is perhaps Subjectively Expected Utility (SEU; Savage, 1954). This model of rational behavior implies that people act as if they are calculating the "expected utility" of each option, choosing the one that they believe has the highest value. It has been criticized, however, as some researchers doubted whether humans actually perform the mental multiplications and additions suggested by SEU. Simon (1955) was the first to challenge the assumptions of optimizing decision theories (such as SEU), making strong arguments concerning the limited capacity of the decision maker, for which he introduced the term “bounded......
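The SEU rule the excerpt describes, weight each possible outcome's utility by its subjective probability, sum, and choose the option with the highest total, can be sketched in a few lines. This is a minimal illustration only: the option names, probabilities, and utilities below are invented for the example, not taken from the essay or from Savage (1954).

```python
def expected_utility(option):
    """Sum of (subjective probability * utility) over an option's outcomes."""
    return sum(p * u for p, u in option["outcomes"])

def seu_choice(options):
    """Choose the option whose subjectively expected utility is highest."""
    return max(options, key=expected_utility)["name"]

# Two hypothetical gambles, each a list of (probability, utility) pairs.
options = [
    {"name": "safe bet",  "outcomes": [(1.0, 50)]},            # SEU = 50
    {"name": "risky bet", "outcomes": [(0.5, 120), (0.5, 0)]}, # SEU = 60
]

print(seu_choice(options))  # the risky bet, since 60 > 50
```

Simon's "bounded rationality" critique, mentioned at the end of the excerpt, is precisely that real decision makers lack the capacity to carry out these multiplications and additions over every option.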

Words: 4800 - Pages: 20