Word meaning as function of the composition of its phonemes
tl;dr
Linguists like to claim that the mapping from sounds to word meanings is mostly arbitrary. Can you point out research that supports this claim? Specifically, I am looking for hard evidence in the form of experimental research, not armchair linguistics.
Details
Over the years I have repeatedly heard the claim that the meaning of a word is not compositionally computed from its constituent phonemes. In other words, the mapping from sounds to meaning is arbitrary. Whenever somebody made this claim, it was obviously always restricted to atomic words such as house or tree rather than compounds like treehouse. Nobody would argue that the meaning of a compound is not derived from its constituent words.
Let me state this more formally, so we understand each other correctly.
(no latex support? really? uff...)
For atomic words (not compounds, not words with productive affixes like un, etc.), there is assumed to be a mapping
f : W_p --> W_s
where W_p denotes the set of phonological representations of all words and W_s denotes the set of word meanings. The consensus seems to be that this mapping is just a big lookup table that contains only entire atomic words.
We do not assume the alternative function
g : P* --> W_s
where P* denotes the Kleene closure over the set of all phonemes (of a given language).
Let's also require that
forall w in W_p where w = p_0 .. p_n with p_i in P . f(w) =~ g(p_0 .. p_n)
where =~ means approximately equal under some metric, for example the L2 norm of the difference of the vectors f(w) and g(p_0 .. p_n), if f and g map into an n-dimensional vector space.
(We could, for example, define f(a) =~ g(b) to be true iff g(b) is among the k closest vectors to f(a).)
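To make this nearest-neighbour reading of =~ concrete, here is a minimal sketch. The vectors and the function name are purely illustrative (made-up toy data, not part of any proposed model of the lexicon):

```python
import math

def approx_equal(fa, gb, all_vectors, k=5):
    """Check whether g(b) is among the k vectors closest to f(a),
    using L2 distance in the shared semantic vector space."""
    nearest = sorted(all_vectors, key=lambda v: math.dist(v, fa))[:k]
    return any(v == gb for v in nearest)

# toy semantic space: four word meanings as 3-d vectors (invented numbers)
space = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (1.0, 1.0, 1.0), (2.0, 2.0, 2.0)]
fa = space[0]   # f(w) for some word w
gb = space[1]   # hypothetical compositional output g(p_0 .. p_n)
print(approx_equal(fa, gb, space, k=2))  # True: gb is the 2nd-closest vector to f(w)
```

Under this definition, =~ degrades gracefully: a vague g that lands merely near f(w) still counts as matching.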
g, in contrast to f, internally performs some computation on the sequence of n input phonemes p_0 .. p_n. It does not perform a simple lookup of p_0 .. p_n; it only looks up either single phonemes or a limited number of phoneme combinations and then computes their composite meaning according to some unknown procedure.
So while f and g are extensionally equivalent (up to the error allowed for =~) they differ intensionally.
However, to date, nobody claiming that f rather than g is how sequences of sounds are mapped to meaning has ever provided any research papers to back this up.
Can you point me to any papers that investigated this and tried to falsify the assumption that a function like g exists, i.e. that the meaning of atomic words is computed from some composition of their constituent phonemes?
phonology semantics psycholinguistics lexical-semantics cognitive-linguistics
You would need to show that transitive closure. You are making a formal argument, but the claim that you attack is not a formal one, the way you present it. I'll just claim that your premise is potentially flawed, until proven otherwise. That's just not how it works. [cont]
– vectory
May 2 at 20:37
You are essentially still trying to understand what they said, who said it and in what context (otherwise give a dog a bone). There's no need to reject the claim, as you seem to out of fear that it contradicts your intuition, if you don't know what the claim is.
– vectory
May 2 at 20:48
I don't understand what you are saying, sorry.
– lo tolmencre
May 2 at 20:52
Transitivity and reflexivity are properties of relations. A phoneme is not a relation. Unless you somehow redefine the linguistic concept of a phoneme or the mathematical concept of a relation, which you didn't, your "transitive and reflexive closure over the set of phonemes" is just pseudo-formal jabber that can't even possibly exist.
– lemontree♦
May 2 at 21:03
You may cast your vote for LaTeX formatting here: linguistics.meta.stackexchange.com/questions/509/…
– lemontree♦
May 2 at 21:06
asked May 2 at 17:46 by lo tolmencre; edited May 3 at 15:42
5 Answers
This is probably not the kind of answer you are looking for, but I guess the following two points would have to be considered as strong indications that meaning is not computed from phonology.
Polysemy (wood: the stuff a tree is made of as well as a collection of trees growing together) and homophony (pear, pair). This implies g is not a function. Also I don't know how the inverse of g works – how do speakers get from meanings to sounds?
What would the existence of a function such as g predict about language change, and how does it correspond to the kinds of changes we actually observe? Note especially the following two cases.
Changes in the phonological structure that are not accompanied by a change in meaning (e.g. metathesis third < OE þridda; cf. three) and vice versa (wicked went from morally bad to excellent). The former is very weird – why would g change in such a fashion that three retains its meaning while the very closely related phonological form that maps to the meaning of the corresponding ordinal changes (with all other words containing r retaining their meaning)? The weirdness of the latter lies in the fact that words that are substrings of a word that changes its meaning (e.g. wick to wicked) do not change along, nor do any other words that stand in a relationship of X : Xed.
Two more armchairy arguments:
Chomsky's question: How would a learner infer g? Looking at, for instance, but, butt, butter, buttress – is there any better strategy than memorization? Any other strategy at all?
Why do competent native speakers with a vocabulary exceeding ten thousand words still need to look up unfamiliar words? And what happens when they look up a word – is g adapted in some manner? Do the meanings of all other words consequently change?
I wanted to comment on the same problem regarding the ambiguity of the functions f and g, but I can't argue against a mathematical formula. The target set doesn't have to be flat; the elements of the set don't have to be primitives. The claim in question, and the attacked claim, are both so weakly defined that they are arbitrary.
– vectory
May 2 at 19:59
Even if g can do complex stuff (e.g. but is a substring of butt but the meanings are as different as they could be), there's still the question: Well, what kind of linguistic changes would we likely observe given g? I have no idea. But the changes we do observe fit the assumption of arbitrariness quite well ("What, we say aks instead of ask now? Okay, cool.").
– David Vogt
May 2 at 20:03
You are right, homophones would make f and g not functions. We can correct that by letting them map to sets of word meanings rather than to individual word meanings. You also assume g to be very precise in its mapping to word meanings. g could be vague, as suggested by my definition of =~. If it is vague enough and the mapping very complicated, cases such as wicked do not necessarily disprove g's existence.
– lo tolmencre
May 2 at 20:16
Why should inferring g be a problem? phonemes could be correlated with co-occuring phonemes relative to word meanings somehow.
– lo tolmencre
May 2 at 20:19
1) I look up words frequently, and there are a lot of simplexes among them (as complex words, as you indicated, often have compositional meanings). 2) Wouldn't the detection of the kind of correlations you mention require knowing a vast set of arbitrary sound-meaning-correspondences? 3) wick : wicked in itself would not be a problem, but what if the meaning of one element changes while the meaning of other elements of the form X : Xed stays constant? Does the assumption of sound-meaning-correspondence predict that these kinds of changes happen?
– David Vogt
May 2 at 20:33
Interestingly, it is so self-evident that the arbitrariness claim is true that nobody has experimentally verified the claim. But it would not be hard to do, if you have access to a captive subject pool.
There are many procedures that could be followed, but the basic idea is to take recordings of actual words from various languages, present them (one at a time) to speakers of random languages (take note of what they speak), and have them assign a meaning to the words. Alternatively, give them a set of maybe 5 glosses (in their language), one of which is the correct translation and the others randomly selected. For instance, a subject is presented with [goahti] and told to choose between "he ate; hut; running; lemur; until". The word is from North Saami and it means "hut".
If there is a non-arbitrary sound-meaning relation, speakers (regardless of native language) should do better than chance in selecting the meaning, but if it is arbitrary, non-Saami speakers should perform at chance and Saami speakers should guess correctly very often. (You have to exclude people like me who know some Saami but don't actually speak it, and maybe exclude many Norwegians, since it's one of those widely-known Saami words in Norway.)
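To sketch how such an experiment might be scored: with five glosses per item, chance performance is 1 in 5, and an exact one-sided binomial test can check whether a subject beats that rate. All numbers below are invented for illustration; this is not data from any actual study:

```python
from math import comb

def binom_sf(k, n, p):
    """Exact one-sided tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# hypothetical run: a subject hears 50 foreign words, each with 5 glosses,
# so the chance rate is p = 1/5 (counts below are made up)
n_trials, n_correct = 50, 17
p_value = binom_sf(n_correct, n_trials, 1 / 5)
print(f"p = {p_value:.4f}")  # a small p would suggest above-chance guessing,
                             # i.e. some non-arbitrary sound-meaning relation
```

Under strict arbitrariness, non-speakers of the stimulus language should produce p-values distributed uniformly, not clustered near zero.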
One big problem would be keeping track of crosslinguistically polysemous words. For instance, [moto] apparently means "blades of grass, trunks; falcons" in Japanese, "motorcycle" in various Romance languages, "person" in Lingala, "fire" in various other Bantu languages, "eye" in Tiruray. Also, Mongolian [xɛɮ] "language" sounds a lot like "hell" to English speakers; Somali [maðaħ] "head" sounds a lot like "mother" and [naag] "woman" sounds a lot like English "nag". In scoring or setting up the stimuli, you'd need to filter out or somehow control for words of one language that sound similar enough to words of a subject's language that they think it is a word of their language.
That is probably why nobody has done the experiment.
You are assuming a universal g here though. There could be per language or maybe even per speaker gs. Different languages encode the same concepts differently (with different words). So they might also have their own g that translates phoneme sequences to word meanings, just like they have their own syntax, phonotactics etc.
– lo tolmencre
May 2 at 20:24
I see no way that the question can even be experimentally tested if you don't define what it means to be "arbitrary". If a person knows the word "dog", they know the meaning and sound of "dog". If they don't know the word "dog", then no experiment can establish that their response on some test is "arbitrary" versus "non-arbitrary". Also, I don't assume g at all. You are smuggling your conclusions into the premise, so I'm doing without your claimed functions. I'm just answering the titular question. You need to prove that your formulae are entailed by the question.
– user6726
May 3 at 19:23
With arbitrary I am basically referring to Saussure's notion of the arbitrariness of the sign. How am I smuggling any conclusion into a premise if I don't even have a conclusion? No idea what you mean. My formulas simply restate what I wrote in prose.
– lo tolmencre
May 3 at 19:35
Also, who says you need to perform experiments with people? You could train statistical models to operate on phoneme representations of texts and on grapheme representations of texts and look for differences in their performance. If the phoneme model outperforms the grapheme model, the model might have found a function that semantically composes phonemes more effectively than graphemes. That is not a proof of g --- which is impossible anyway --- but at least possibly some evidence and reason to dig further.
– lo tolmencre
May 3 at 19:48
I thought you were looking for something beyond armchair methods. If you have the math (the ultimate armchair method) right, you don't need to train anything. I'm using arbitrary in Saussure's sense.
– user6726
May 3 at 19:48
The proof of the claim is trivial. Words on the Swadesh list show little correlation between meaning and phonetics, save for exceptions like mama. If there is a hidden correlation, it is because the relation is more complicated.
EDIT: A weaker argument would be constrained to a single language of a single speaker. I guess that's more or less what you mean. It's not quite clear what you mean, though: phone, word and set of semantics are not well defined, as far as I know. That's in essence the same claim as the one you attack, if whatever you referred to was a response to a failed attempt to explain meaning from phonetics. That would be called inductive reasoning, i.e. experience. The smallest constituent of speech, the phone, ordered in sequences, is not enough to explain meaning, or to learn a language.
I believe you are conflating arbitrariness with other concepts in your question. Phonetic arbitrariness means that in a language, semantics are independent of the choice of phonetics.
First, let's talk about what choice of phonetics means. Any particular language has a finite number of phonemes, a discrete subset drawn from the continuous spectrum of articulatable human sounds. There are an infinite number of (for example) vowel sounds a human can possibly make, taking into account continuous features such as height, backness, pitch, length, intonation, stress, phonation, nasalization, rhotacization, etc. Each particular language divides up this continuous multi-dimensional spectrum of sounds into N_V discrete volumes within this space; a binning process. We would call each volume a vowel (phoneme) and N_V the "number of vowels" in the language. We can do a similar thing for all the consonants (in reality, at this level the distinction between vowels and consonants sort of vanishes) and arrive at N, the total number of phonemes in the language.
This set of phonemes constitutes the alphabet Σ of our language. In theoretical terms where a language L is a subset of words Σ∗ defined over an alphabet Σ, N is just the cardinality of Σ. The alphabet Σ is then some unique discretization x[N] of the infinite set of all possible phones U.
Phonetic arbitrariness means that the semantics of L are independent of my choice of discretization function x[ ]. Simply put, it doesn't matter semantically that I have happened to choose a,e,i,o,u as my vowels vs. i, ɪ, e, ɛ, æ, ɑ, o, u, ʊ, ʌ, ɚ, ə.
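As a toy illustration of one such discretization function x[ ], here is a sketch that bins a point in continuous formant space to the nearest of five vowel centroids. The F1/F2 values are rough textbook approximations chosen only to make the example run, not measured data:

```python
import math

# Illustrative only: approximate F1/F2 formant centroids (Hz) for a
# five-vowel system; values are rough textbook figures, not measurements.
VOWEL_CENTROIDS = {
    "i": (240, 2400),
    "e": (390, 2300),
    "a": (850, 1610),
    "o": (360, 640),
    "u": (250, 595),
}

def bin_vowel(f1, f2):
    """Discretize a point in continuous formant space into the nearest
    vowel phoneme -- one crude realization of the binning function x[N]."""
    return min(VOWEL_CENTROIDS, key=lambda v: math.dist((f1, f2), VOWEL_CENTROIDS[v]))

print(bin_vowel(300, 2350))  # -> i (a high front token bins to "i")
```

Phonetic arbitrariness then amounts to the claim that swapping in a different centroid table (a different x[ ]) leaves the semantics of the language untouched.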
More editorial: once I have chosen a particular alphabet for my language, then I would say there is a set of heuristics for determining possible semantics based on words in that language. I won't go so far as to say there is a proper function, mapping, or even deterministic algorithm to get from words to meanings. Ambiguity, homophony, poor hearing conditions, etc. mean that listeners can be mistaken about a speaker's meaning, which points to this being a heuristic or probabilistic process rather than a deterministic one.
If there's a function from all sequences of phonemes to meaning, then it should be possible to ask, e.g., what's the meaning of "kerblaxumenfar", and English speakers should be able to give a reasonable, consistent answer.
But the best answer will be "I don't know", and if you keep pressing, the answers you will get will be totally random. Name of some Amazonian tribe? A rare disease? A failed Silicon Valley startup?
So your function g becomes "a function from some subset of phoneme sequences to meanings", at which point it becomes unclear how g is different from f, after all.
I never said g was a total function. I considered it obvious that g only operates on the elements from P* that correspond to some element of W_p and thought it would be unnecessarily verbose to specify that. But that does not make it unclear how g is different from f. I stated in my question that f performs a simple lookup while g computes the meaning of its input compositionally.
– lo tolmencre
May 3 at 19:40
Any sufficiently large lookup table can emulate a "compositional" function, and any sufficiently complicated function can emulate a lookup table, especially if the domain is finite. So how are you going to design an experiment that can test whether people are using f or g? If there's no such experiment, then it just boils down to "which one is a more useful abstraction for human languages?"
– jick
May 3 at 20:01
See my suggestion under answer linguistics.stackexchange.com/a/31322/10543
– lo tolmencre
May 3 at 20:03
I think we're talking past each other. I'm specifically not asking which one is a more useful abstraction. I'm asking: what is a hypothetical scenario/phenomenon that people may do that will allow us to conclude "this cannot happen if people are using f (or g) inside their minds, therefore g (or f) must be happening."?
– jick
May 3 at 20:07
I understand. And I can think of an experiment with people directly. But I can also think of a statistical approach that might be less prone to error; one that uses two different representations of what people produce in speech, as described in my other comment. My reasoning is as follows: We agree that many languages have inconsistent spelling. We also agree that phonemic transcription is a consistent representation of words. If a model operating on phonemes outperformed a model operating on graphemes, what could be the reason?
– lo tolmencre
May 3 at 23:26
5 Answers
This is probably not the kind of answer you are looking for, but I guess the following two points would have to be considered as strong indications that meaning is not computed from phonology.
Polysemy (wood: the stuff a tree is made of as well as a collection of trees growing together) and homophony (pear, pair). This implies g is not a function. Also I don't know how the inverse of g works – how do speakers get from meanings to sounds?
What would the existence of a function such as g predict about language change, and how does it correspond to the kinds of changes we actually observe? Note especially the following two cases.
Changes in the phonological structure that are not accompanied by a change in meaning (e.g. metathesis third < OE þridda; cf. three) and vice versa (wicked went from morally bad to excellent). The former is very weird – why would g change in such a fashion that three retains its meaning while the very closely related phonological form that maps to the meaning of the corresponding ordinal changes (with all other words containing r retaining their meaning)? The weirdness of the latter lies in the fact that words that are substrings of a word that changes its meaning (e.g. wick to wicked) do not change along, nor do any other words that stand in a relationship of X : Xed.
Two more armchairy arguments:
Chomsky's question: How would a learner infer g? Looking at, for instance, but, butt, butter, buttress – is there any better strategy than memorization? Any other strategy at all?
Why do competent native speakers with a vocabulary exceeding ten thousand words still need to look up unfamiliar words? And what happens when they look up a word – is g adapted in some manner? Do the meanings of all other words consequently change?
I wanted to comment on the same problem regarding the unambiguity of the functions f and g, but I can't argue against a mathematical formula. The target set doesn't have to be flat, and the elements of the set don't have to be primitives. The claim in question, and the attacked claim, are both so weakly defined that they are arbitrary.
– vectory
May 2 at 19:59
Even if g can do complex stuff (e.g. but is a substring of butt but the meanings are as different as they could be) there's still the question: Well, what kind of linguistic changes would we likely observe given g? I have no idea. But the changes we do observe fit the assumption of arbitrariness quite well ("What, we say aks instead of ask now? Okay, cool.").
– David Vogt
May 2 at 20:03
You are right, homophones would make f and g not functions. We can correct that by letting them map to sets of word meanings rather than to individual word meanings. You also assume g to be very precise in its mapping to word meanings. g could be vague, as suggested by my definition of =~. If it is vague enough and the mapping very complicated, cases such as wicked do not necessarily disprove g's existence.
– lo tolmencre
May 2 at 20:16
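The set-valued repair suggested here can be sketched concretely. The pseudo-IPA keys and the meaning labels below are invented for illustration:

```python
# Hypothetical sketch: letting f map a phoneme string to a *set* of
# meanings restores functionhood in the face of homophony and polysemy.
f = {
    "pɛər": {"PEAR(fruit)", "PAIR(two-of)"},    # homophones share one key
    "wʊd":  {"WOOD(material)", "WOOD(forest)"}, # polysemy handled the same way
}

# f is a well-defined function again: one input, exactly one (set-valued) output.
assert f["pɛər"] == {"PEAR(fruit)", "PAIR(two-of)"}
```

Disambiguation between the members of each set would then fall to context rather than to f (or g) itself.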
Why should inferring g be a problem? Phonemes could be correlated with co-occurring phonemes relative to word meanings somehow.
– lo tolmencre
May 2 at 20:19
1) I look up words frequently, and there are a lot of simplexes among them (as complex words, as you indicated, often have compositional meanings). 2) Wouldn't the detection of the kind of correlations you mention require knowing a vast set of arbitrary sound-meaning-correspondences? 3) wick : wicked in itself would not be a problem, but what if the meaning of one element changes while the meaning of other elements of the form X : Xed stays constant? Does the assumption of sound-meaning-correspondence predict that these kinds of changes happen?
– David Vogt
May 2 at 20:33
edited May 2 at 21:07
answered May 2 at 19:26
David Vogt
407 · 1 silver badge · 6 bronze badges
Interestingly, it is so self-evident that the arbitrariness claim is true that nobody has experimentally verified the claim. But it would not be hard to do, if you have access to a captive subject pool. There are many procedures that could be followed, but the basic idea is to take recordings of actual words from various languages, present them (one at a time) to speakers of random languages (take note of what they speak), and have them assign a meaning to the words. Alternatively, give them a set of maybe 5 glosses (in their language), one of which is the correct translation and the others are randomly selected. For instance, a subject is presented with [goahti] and told to choose between "he ate; hut; running; lemur; until". The word is from North Saami and it means "hut". If there is a non-arbitrary sound-meaning relation, speakers (regardless of native language) should do better than chance in selecting the meaning, but if it is arbitrary, non-Saami speakers should perform at chance and Saami speakers should guess correctly very often. (You have to exclude people like me who know some Saami but don't actually speak it, and maybe exclude many Norwegians since it's one of those widely-known Saami words in Norway).
One big problem would be keeping track of crosslinguistically polysemous words. For instance, [moto] apparently means "blades of grass, trunks; falcons" in Japanese, "motorcycle" in various Romance languages, "person" in Lingala, "fire" in various other Bantu languages, and "eye" in Tiruray. Also, Mongolian [xɛɮ] "language" sounds a lot like "hell" to English speakers; Somali [maðaħ] "head" sounds a lot like "mother" and [naag] "woman" sounds a lot like English "nag". In scoring or setting up the stimuli, you'd need to filter out or somehow control for words of one language that sound similar enough to words of a subject's language that they think it is a word of their language.
That is probably why nobody has done the experiment.
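The chance-level prediction in this forced-choice design can be checked with an ordinary one-sided binomial test. The trial counts below are invented, and the helper function is a stdlib-only stand-in for something like scipy.stats.binomtest:

```python
# Scoring sketch for the 5-gloss task: under the arbitrariness hypothesis,
# non-speakers pick the correct gloss at chance (p = 1/5); a one-sided
# binomial test asks whether observed accuracy exceeds that.
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials, n_correct = 100, 31          # hypothetical subject data
p_value = binom_sf(n_correct, n_trials, 1 / 5)
# a small p-value would suggest better-than-chance gloss selection,
# i.e. evidence against a purely arbitrary sound-meaning relation
```

With 31/100 correct against a 20% baseline, the test comes out well below conventional significance thresholds; real stimuli would of course need the polysemy and similar-sounding-word controls described above.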
You are assuming a universal g here though. There could be per language or maybe even per speaker gs. Different languages encode the same concepts differently (with different words). So they might also have their own g that translates phoneme sequences to word meanings, just like they have their own syntax, phonotactics etc.
– lo tolmencre
May 2 at 20:24
I see no way that the question can even be experimentally tested if you don't define what it means to be "arbitrary". If a person knows the word "dog", they know the meaning and sound of "dog". If they don't know the word "dog", then no experiment can establish that their response on some test is "arbitrary" versus "non-arbitrary". Also, I don't assume g at all. You are smuggling your conclusions into the premise, so I'm doing without your claimed functions. I'm just answering the titular question. You need to prove that your formulae are entailed by the question.
– user6726
May 3 at 19:23
By "arbitrary" I am basically referring to Saussure's notion of the arbitrariness of the sign. How am I smuggling any conclusion into a premise if I don't even have a conclusion? No idea what you mean. My formulas simply restate what I wrote in prose.
– lo tolmencre
May 3 at 19:35
Also, who says you need to perform experiments with people? You could train statistical models to operate on phoneme representations of texts and on grapheme representations of texts and look for differences in their performance. If the phoneme model outperforms the grapheme model, the model might have found a function that semantically composes phonemes more effectively than graphemes. That is not a proof of g --- which is impossible anyway --- but at least possibly some evidence and reason to dig further.
– lo tolmencre
May 3 at 19:48
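The model-comparison idea in this comment can be sketched as a tiny harness: fit the same simple classifier to grapheme and phoneme encodings of identical words and compare accuracy. All words, transcriptions, and semantic labels below are invented toy examples, and the "model" is a trivial character-bigram centroid classifier; a real study would need large corpora and stronger models:

```python
from collections import Counter

def bigram_features(word):
    # bag of character bigrams
    return Counter(word[i:i + 2] for i in range(len(word) - 1))

def fit(examples):
    """Per-class centroid of character-bigram counts."""
    centroids = {}
    for word, label in examples:
        centroids.setdefault(label, Counter()).update(bigram_features(word))
    return centroids

def accuracy(centroids, examples):
    def predict(word):
        feats = bigram_features(word)
        # score = overlap between the word's bigrams and each class centroid
        return max(centroids, key=lambda c: sum((feats & centroids[c]).values()))
    return sum(predict(w) == y for w, y in examples) / len(examples)

# toy data: same words as spelled vs. in rough phonemic form ("X"/"Y" are
# invented semantic classes)
graphemic = [("cough", "X"), ("rough", "X"), ("through", "Y"), ("though", "Y")]
phonemic  = [("kɒf",   "X"), ("rʌf",   "X"), ("θruː",    "Y"), ("ðəʊ",    "Y")]

g_acc = accuracy(fit(graphemic), graphemic)
p_acc = accuracy(fit(phonemic), phonemic)
# comparing g_acc and p_acc (on held-out data, in a real setup) is the
# kind of contrast the comment proposes
```

Note that English orthography makes the graphemic forms of these words nearly identical (ough everywhere) while the phonemic forms separate cleanly, which is exactly the asymmetry the proposal wants to exploit.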
I thought you were looking for something beyond armchair methods. If you have the math (the ultimate armchair method) right, you don't need to train anything. I'm using arbitrary in Saussure's sense.
– user6726
May 3 at 19:48
answered May 2 at 19:37
user6726
39.8k · 1 gold badge · 27 silver badges · 78 bronze badges
You are assuming a universal g here though. There could be per language or maybe even per speaker gs. Different languages encode the same concepts differently (with different words). So they might also have their own g that translates phoneme sequences to word meanings, just like they have their own syntax, phonotactics etc.
– lo tolmencre
May 2 at 20:24
I see no way that the question can even be experimentally tested if you don't define what it means to be "arbitrary". If a person knows the word "dog", they know the meaning and sound of "dog". If they don't know the word "dog", then no experiment can establish that their response on some test is "arbitrary" versus "non-arbitrary". Also, I don't assume g at all. You are smuggling your conclusions into the premise, so I'm doing without your claimed functions. I'm just answering the titular question. You need to prove that your formulae are entailed by the question.
– user6726
May 3 at 19:23
With "arbitrary" I am basically referring to Saussure's notion of the arbitrariness of the sign. How am I smuggling any conclusion into a premise if I don't even have a conclusion? No idea what you mean. My formulas simply restate what I wrote in prose.
– lo tolmencre
May 3 at 19:35
Also, who says you need to perform experiments with people? You could train statistical models to operate on phoneme representations of texts and on grapheme representations of texts and look for differences in their performance. If the phoneme model outperforms the grapheme model, the model might have found a function that semantically composes phonemes more effectively than graphemes. That is not a proof of g --- which is impossible anyway --- but at least possibly some evidence and reason to dig further.
– lo tolmencre
May 3 at 19:48
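The phoneme-vs-grapheme comparison proposed in the comment above can be sketched with something much simpler than a trained semantic model: a Laplace-smoothed character-bigram model scored on orthographic versus phonemic renderings of the same words. This only measures sequence predictability, not meaning, so it merely illustrates the experimental setup; the toy words and transcriptions below are illustrative, not a real corpus.

```python
from collections import Counter
from math import log2

def bigram_cross_entropy(train, test):
    """Average bits per symbol of a Laplace-smoothed character-bigram model."""
    pairs, contexts, vocab = Counter(), Counter(), set()
    for word in train:
        s = "#" + word  # '#' marks word start
        vocab.update(s)
        for a, b in zip(s, s[1:]):
            pairs[(a, b)] += 1
            contexts[a] += 1
    V = len(vocab) + 1  # +1 reserves mass for unseen symbols
    bits, n = 0.0, 0
    for word in test:
        s = "#" + word
        for a, b in zip(s, s[1:]):
            p = (pairs[(a, b)] + 1) / (contexts[a] + V)  # Laplace smoothing
            bits -= log2(p)
            n += 1
    return bits / n

# Toy parallel data: the same words, spelled vs. phonemically transcribed.
ortho = ["enough", "though", "through", "tough", "cough"]
phon  = ["ɪnʌf", "ðoʊ", "θru", "tʌf", "kɔf"]

print("graphemes:", bigram_cross_entropy(ortho, ortho))
print("phonemes: ", bigram_cross_entropy(phon, phon))
```

A serious version of the experiment would train comparable models on a meaning-prediction task over large held-out corpora; this sketch only shows how the two representations plug into the same evaluation.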
I thought you were looking for something beyond armchair methods. If you have the math (the ultimate armchair method) right, you don't need to train anything. I'm using arbitrary in Saussure's sense.
– user6726
May 3 at 19:48
The proof of the claim is trivial: words on the Swadesh list show little correlation between meaning and phonetics, save for exceptions like mama. If there is a hidden correlation, that is because the relation is more complicated.
EDIT: A weaker argument would be constrained to a single language of a single speaker. I guess that's more or less what you mean, though it's not quite clear; phone, word and set of semantics are not well defined, as far as I know. That is in essence the same claim as the one you attack, if whatever you referred to was a response to a failed attempt at explaining meaning from phonetics, which would be called inductive reasoning, i.e. experience. The smallest constituent of speech, the phone, ordered in sequences, is not enough to explain meaning, or to learn a language.
edited May 2 at 19:50
answered May 2 at 19:40
vectory
656 · 1 silver badge · 13 bronze badges
I believe you are conflating arbitrariness with other concepts in your question. Phonetic arbitrariness means that in a language, semantics are independent of the choice of phonetics.
First, let's talk about what choice of phonetics means. Any particular language has a finite number of phonemes, a discrete subset drawn from the continuous spectrum of articulatable human sounds. There are an infinite number of (for example) vowel sounds a human can possibly make, taking into account continuous features such as height, backness, pitch, length, intonation, stress, phonation, nasalization, rhotacization, etc. Each particular language divides up this continuous multi-dimensional spectrum of sounds into N_V discrete volumes within this space; a binning process. We would call each volume a vowel (phoneme) and N_V the "number of vowels" in the language. We can do the same thing for all the consonants (in reality, at this level the distinction between vowels and consonants rather vanishes) and arrive at N, the total number of phonemes in the language.
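The binning process described above can be sketched as nearest-centroid classification in a simplified two-dimensional formant space. The F1/F2 centroid values below are rough illustrative figures for a generic five-vowel system, not measurements of any particular language.

```python
# Illustrative F1/F2 centroids (Hz) for a hypothetical 5-vowel system.
centroids = {
    "i": (280, 2250),
    "e": (440, 2100),
    "a": (730, 1300),
    "o": (500, 900),
    "u": (320, 850),
}

def bin_vowel(f1, f2):
    """Map a continuous vowel token onto a discrete phoneme category."""
    # Nearest centroid by squared Euclidean distance; a realistic model
    # would also weigh duration, rounding, nasalization, context, etc.
    return min(centroids, key=lambda v: (centroids[v][0] - f1) ** 2
                                        + (centroids[v][1] - f2) ** 2)

print(bin_vowel(300, 2200))  # falls in the /i/ volume
print(bin_vowel(700, 1250))  # falls in the /a/ volume
```

Changing the `centroids` table changes N_V and the volume boundaries, which is exactly the "choice of discretization" the answer goes on to discuss.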
This set of phonemes constitutes the alphabet Σ of our language. In theoretical terms where a language L is a subset of words Σ∗ defined over an alphabet Σ, N is just the cardinality of Σ. The alphabet Σ is then some unique discretization x[N] of the infinite set of all possible phones U.
Phonetic arbitrariness means that the semantics of L are independent of my choice of discretization function x[ ]. Simply put, it doesn't matter semantically that I have happened to choose a,e,i,o,u as my vowels vs. i, ɪ, e, ɛ, æ, ɑ, o, u, ʊ, ʌ, ɚ, ə.
More editorial: once I have chosen a particular alphabet for my language, then I would say there is a set of heuristics for determining possible semantics based on words in that language. I won't go so far as to say there is a proper function, mapping, or even deterministic algorithm to get from words to meanings. Ambiguity, homophony, poor hearing conditions, etc. mean that listeners can be mistaken about a speaker's meaning, which points to this being a heuristic or probabilistic process rather than a deterministic one.
answered May 2 at 22:54
Mark Beadles
6,130 · 2 gold badges · 19 silver badges · 43 bronze badges
If there's a function from all sequences of phonemes to meaning, then it should be possible to ask, e.g., what's the meaning of "kerblaxumenfar", and English speakers should be able to give a reasonable, consistent answer.
But the best answer will be "I don't know", and if you keep pressing, the answers you will get will be totally random. Name of some Amazonian tribe? A rare disease? A failed Silicon Valley startup?
So your function g becomes "a function from some subset of phoneme sequences to meanings", at which point it becomes unclear how g is different from f, after all.
answered May 3 at 19:10
jick
317 · 3 silver badges · 5 bronze badges
I never said g was a total function. I considered it obvious that g only operates on the elements from P* that correspond to some element of W_p and thought it would be unnecessarily verbose to specify that. But that does not make it unclear how g is different from f. I stated in my question that f performs a simple lookup while g computes the meaning of its input compositionally.
– lo tolmencre
May 3 at 19:40
Any sufficiently large lookup table can emulate a "compositional" function, and any sufficiently complicated function can emulate a lookup table, especially if the domain is finite. So how are you going to design an experiment that can test whether people are using f or g? If there's no such experiment, then it just boils down to "which one is a more useful abstraction for human languages?"
– jick
May 3 at 20:01
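The equivalence claimed in the comment above is easy to demonstrate: over a finite domain, a compositional rule and the lookup table built from it are extensionally identical, so no input-output experiment distinguishes them. The "compositional" rule below is an arbitrary stand-in chosen for illustration, not a claim about real word semantics.

```python
from itertools import product

PHONEMES = "abk"  # toy phoneme inventory

def g(seq):
    """A toy 'compositional' meaning, built up phoneme by phoneme."""
    meaning = 0
    for ph in seq:
        meaning = meaning * 31 + ord(ph)  # one composition step per phoneme
    return meaning

# Build f as a pure lookup table over all sequences of length 1..3.
f = {"".join(s): g("".join(s))
     for n in range(1, 4)
     for s in product(PHONEMES, repeat=n)}

# Extensionally identical on the finite domain: any behavioral test
# that queries inputs and observes outputs sees the same mapping.
assert all(f[w] == g(w) for w in f)
print(len(f), "entries, all matching")
```

This is why the question "f or g?" needs evidence beyond input-output behavior on attested words, e.g. generalization to novel forms.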
See my suggestion under answer linguistics.stackexchange.com/a/31322/10543
– lo tolmencre
May 3 at 20:03
I think we're talking past each other. I'm specifically not asking which one is a more useful abstraction. I'm asking: what is a hypothetical scenario/phenomenon that people may do that will allow us to conclude "this cannot happen if people are using f (or g) inside their minds, therefore g (or f) must be happening."?
– jick
May 3 at 20:07
I understand. And I can think of an experiment with people directly. But I can also think of a statistical approach that might be less prone to error; one that uses two different representations of what people produce in speech, as described in my other comment. My reasoning is as follows: We agree that many languages have inconsistent spelling. We also agree that phonemic transcription is a consistent representation of words. If a model operating on phonemes outperformed a model operating on graphemes, what could be the reason?
– lo tolmencre
May 3 at 23:26
|
show 3 more comments
I never said g was a total function. I considered it obvious that g only operates on the elements from P* that correspond to some element of W_p and thought it would be unnecessarily verbose to specify that. But that does not make it unclear how g is different from f. I stated in my question that f performs a simple lookup while g computes the meaning of its input compositionally.
– lo tolmencre
May 3 at 19:40
Any sufficiently large lookup table can emulate a "compositional" function, and any sufficiently complicated function can emulate a lookup table, especially if the domain is finite. So how are you going to design an experiment that can test whether people are using f or g? If there's no such experiment, then it just boils down to "which one is a more useful abstraction for human languages?"
– jick
May 3 at 20:01
See my suggestion under answer linguistics.stackexchange.com/a/31322/10543
– lo tolmencre
May 3 at 20:03
I think we're talking past each other. I'm specifically not asking which one is a more useful abstraction. I'm asking: what is a hypothetical scenario/phenomenon that people may do that will allow us to conclude "this cannot happen if people are using f (or g) inside their minds, therefore g (or f) must be happening."?
– jick
May 3 at 20:07
I understand. And I can think of an experiment with people directly. But I can also think of a statistical approach that might be less prone to error; one that uses two different representations of what people produce on speech, as described in my other comment. My reasoning is as follows: We agree, that many languages have inconsistent spelling. We also agree, that phonemic transcription is a consistent representation of words.If a model operating on phonemes outperformed a model operating on graphemes, what could be the reason?
– lo tolmencre
May 3 at 23:26
I never said g was a total function. I considered it obvious that g only operates on the elements from P* that correspond to some element of W_p and thought it would be unnecessarily verbose to specify that. But that does not make it unclear how g is different from f. I stated in my question that f performs a simple lookup while g computes the meaning of its input compositionally.
– lo tolmencre
May 3 at 19:40
I never said g was a total function. I considered it obvious that g only operates on the elements from P* that correspond to some element of W_p and thought it would be unnecessarily verbose to specify that. But that does not make it unclear how g is different from f. I stated in my question that f performs a simple lookup while g computes the meaning of its input compositionally.
– lo tolmencre
May 3 at 19:40
Any sufficiently large lookup table can emulate a "compositional" function, and any sufficiently complicated function can emulate a lookup table, especially if the domain is finite. So how are you going to design an experiment that can test whether people are using f or g? If there's no such experiment, then it just boils down to "which one is a more useful abstraction for human languages?"
– jick
May 3 at 20:01
Any sufficiently large lookup table can emulate a "compositional" function, and any sufficiently complicated function can emulate a lookup table, especially if the domain is finite. So how are you going to design an experiment that can test whether people are using f or g? If there's no such experiment, then it just boils down to "which one is a more useful abstraction for human languages?"
– jick
May 3 at 20:01
See my suggestion under answer linguistics.stackexchange.com/a/31322/10543
– lo tolmencre
May 3 at 20:03
I think we're talking past each other. I'm specifically not asking which one is a more useful abstraction. I'm asking: what is a hypothetical scenario/phenomenon that people may do that will allow us to conclude "this cannot happen if people are using f (or g) inside their minds, therefore g (or f) must be happening."?
– jick
May 3 at 20:07
I understand. And I can think of an experiment with people directly. But I can also think of a statistical approach that might be less prone to error; one that uses two different representations of what people produce in speech, as described in my other comment. My reasoning is as follows: we agree that many languages have inconsistent spelling. We also agree that phonemic transcription is a consistent representation of words. If a model operating on phonemes outperformed a model operating on graphemes, what could be the reason?
– lo tolmencre
May 3 at 23:26
You would need to show that transitive closure. You are making a formal argument, but the claim that you attack is not a formal one, the way you present it. I'll just claim that your premise is potentially flawed until proven otherwise. That's just not how it works. [cont]
– vectory
May 2 at 20:37
You are essentially still trying to understand what they said, who said it and in what context (otherwise give a dog a bone). There's no need to reject the claim as you seem to out of fear that it contradicts your intuition, if you don't know what the claim is.
– vectory
May 2 at 20:48
I don't understand what you are saying, sorry.
– lo tolmencre
May 2 at 20:52
Transitivity and reflexivity are properties of relations. A phoneme is not a relation. Unless you somehow redefine the linguistic concept of a phoneme or the mathematical concept of a relation, which you didn't, your "transitive and reflexive closure over the set of phonemes" is just pseudo-formal jabber that can't even possibly exist.
– lemontree♦
May 2 at 21:03
You may cast your vote for LaTeX formatting here: linguistics.meta.stackexchange.com/questions/509/…
– lemontree♦
May 2 at 21:06