(un)emotional investment

Here’s a spin on the depressive realism story. Shiv et al (2005) found that substance abusers and those with brain damage affecting their emotions had enhanced performance on an investment task. According to the authors of the study, the normal controls were actually distracted from making optimum decisions by their emotional involvement in the task.

Shiv, B., Loewenstein, G. & Bechara, A. (2005). ‘The dark side of emotion in decision-making: When individuals with decreased emotional reactions make more advantageous decisions’. Cognitive Brain Research, 23(1), April 2005, 85-92. summary here

Abstract:

Can dysfunction in neural systems subserving emotion lead, under certain circumstances, to more advantageous decisions? To answer this question, we investigated how individuals with substance dependence (ISD), patients with stable focal lesions in brain regions related to emotion (lesion patients), and normal participants (normal controls) made 20 rounds of investment decisions. Like lesion patients, ISD made more advantageous decisions and ultimately earned more money from their investments than the normal controls. When normal controls either won or lost money on an investment round, they adopted a conservative strategy and became more reluctant to invest on the subsequent round, suggesting that they were more affected than lesion patients and ISD by the outcomes of decisions made in the previous rounds.

Link: a related post at mindhacks.com

Why can’t we choose what makes us happy?

This from Hsee, C. K. & Hastie, R. (2006). Decision and experience: Why don’t we choose what makes us happy? Trends in Cognitive Sciences, 10(1), 31-37


Another common belief is that more choice options are always better. In reality, having more options can lead to worse experiences. For example, if employees are given a free trip to Paris, they are happy; if they are given a free trip to Hawaii, they are happy. But if they are given a choice between the two trips, they will be less happy, no matter which option they choose. Having the choice highlights the relative deficiencies in each option. People who choose Paris complain that ‘Paris does not have the ocean’, whereas people who choose Hawaii complain that ‘Hawaii does not have great museums’ .

(my emphasis)

The reference is:
Luce, M.K. et al. (2001) The impact of emotional tradeoff difficulty on decision behavior. In Conflict and Tradeoffs in Decision Making (Weber, E.U. and Baron, J., eds), pp. 86–109, Cambridge University Press

Seems opportunity cost isn’t just something that bothers economists!

the endowment effect & marketing

The endowment effect is that we value more highly what we already have. It’s a variation on the status quo bias that we talk about in Mind Hacks (Hack #74). This cognitive bias is of particular interest to economists, because it has implications for how economies work. If it is strongly in effect, people will trade less than is required to bring about the optimal resource allocation that free markets are theoretically capable of. The most famous demonstration of the endowment effect directly addresses its operation in a market trading situation [1] – showing that even though preferences for a small arbitrary item (a coffee mug) are randomly distributed, if you give half of the group a mug and allow them to trade, less trading happens than you would predict. In other words, more people want to hold on to their mug now that they’ve got one than people without a mug want to get hold of one. The preferences of the group have been realigned according to the initial resource distribution.
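(What would you predict? If valuations really were independent of who happens to own a mug, roughly half the mugs should change hands. The sketch below is my own illustration of that benchmark, assuming uniformly random private valuations and an idealised market that moves every mug to a top-half valuer – it is not the procedure used in [1], which found noticeably less trading than this benchmark predicts.)

```python
import numpy as np

# A rough sketch of the "how much trading would you predict?" benchmark.
# Assumptions (mine, for illustration only): valuations are uniformly
# random, a random half of the group gets a mug, and an idealised market
# moves every mug to someone in the top half of valuations.
rng = np.random.default_rng(0)


def predicted_trade_fraction(n_people=100, n_trials=10_000):
    half = n_people // 2
    fractions = []
    for _ in range(n_trials):
        valuations = rng.random(n_people)              # arbitrary private values
        owners = rng.permutation(n_people)[:half]      # random half get a mug
        top_half = set(np.argsort(valuations)[half:])  # the highest valuers
        # A mug needs to trade only if its current owner is not a top valuer.
        trades = sum(owner not in top_half for owner in owners)
        fractions.append(trades / half)
    return float(np.mean(fractions))


print(predicted_trade_fraction())  # ~0.5: about half the mugs should change hands
```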

This is all relevant to marketing, as well as economics of course. You can see why car-salespeople are keen for you to take a test-drive before you purchase, or why shops are happy to offer a money-back-with-no-questions-asked option. You figure the money-back option into your cost-benefit calculation about whether to take something home, but once you’ve got it home your preferences realign – that item is now “yours”, so you’re far less likely to take it back to the shop, even if it doesn’t turn out to be as good as you thought when you bought it.

Refs and Links:

[1] Kahneman, D., J.L. Knetsch and R.H. Thaler (1990). Experimental Tests of the Endowment Effect and the Coase Theorem. Journal of Political Economy. link
Wikipedia: The endowment effect: link
Experienced traders can overcome the endowment effect : Economist article
References at behaviouralfinance.net

[Cross-posted at idiolect.org.uk]

the price is right regardless of the cost

Zac at ortholog.com writes about an experimental test of buying irrationality using eBay. Quoting:


Test auctions on eBay showed that most people prefer to pay a low price for an item and also pay postage (American: "shipping") than pay a higher price and get free postage, even when the former added up to more than the latter. A CD for $5+$6 postage is preferred to a CD for $10+freepost. It wasn’t presented as that stark a choice: multiple auctions with different price-postage ratios revealed a net preference for low item price and a poor correlation between auction success and stated postage costs. Interesting but hardly surprising: the salience of the price is greater than the cost of shipping (the anchoring cognitive fallacy), and people in general are not as rational or systematic as they/we believe.

(Zac’s links. Read the full post here.)

In Influence, Cialdini highlights scarcity as one of the six principal factors of persuasion. In an auction these pressures combine particularly strongly: scarcity of time (the item is only on sale for a limited period), scarcity of product (items are sold individually, not just as one-of-many ‘off the shelf’) and competition (from other buyers). Add to this heady mix the price/postage sleight of hand and it is no wonder you get choice irrationalities.

Influence (by Robert Cialdini)

Influence by Robert Cialdini is an excellent, excellent book. Not only does it present voluminous evidence on the social psychology of persuasion and compliance, but it does so succinctly and engagingly, mixing academic references with historical vignettes and personal anecdotes. The book discusses how techniques of persuasion work, grouping them under six major headings, and for each heading it provides a ‘defence against’ section detailing how to stop yourself being unduly influenced. The final, glorious, touch is that in order to write the book Cialdini – who is a professor of social psychology – engaged in a three-year project of going undercover to explore first-hand how techniques of persuasion are used in the real world: applying for a waiter’s job to study how to increase customers’ tipping, attending tupperware parties, going on training programmes with door-to-door salesmen… it makes the book a wonderful blend of thorough research and astutely observed practice.

The book has been extensively and excellently summarised here, at happening-here.blogspot.com, so I’m just going to pull out some particularly fun examples of persuasion techniques, particularly as they relate to advertising and marketing.

Notes on Cialdini, R.B. (2001). Influence: Science and Practice. Fourth Edition. Allyn & Bacon.

A key idea is that we all use various cognitive ‘shortcuts’ (heuristics) to decide what to buy. Advertisers can take advantage of these short-cuts to skew our behaviour. For example, there is a price-as-an-indicator-of-quality heuristic: if we’re not thinking carefully about a purchase decision, we might just assume that “better things are more expensive”, so if we want a ‘better’ thing we will just look at the prices to work out which product is better.

[Chivas Regal Scotch Whiskey] “had been a struggling brand until its managers decided to raise its price to a level far above its competitors. Sales skyrocketed, even though nothing was changed in the product itself (Aaker, 1991)” [1]

Or the coupons-give-you-a-bargain heuristic:

“A tire company found that mailed-out coupons which, because of a printing error, offered no savings to recipients produced just as much customer response as did error-free coupons that offered substantial savings” [2]

It’s easy enough to think of other common examples – supermarkets which use three-for-the-price-of-two offers, or put up signs saying things like “Two for £1”. Next time you see one of these, check how much just one costs – it might stem your enthusiasm for the seeming bargain you thought you were being offered.

Here’s another trick, which takes advantage of another natural inclination – that of sticking by our word. Cialdini accuses toy producers of undersupplying stores with ‘craze’ toys just before Christmas – after a barrage of advertising, parents promise their kids the toy but then can’t get hold of one. They buy them a substitute at Christmas and then also have to buy the craze toy in January. He cites the example of the Cabbage Patch Kids, dolls which were heavily advertised one year in the mid-1980s and undersupplied during the holiday season. $25 toys were selling at auction for $700. (A charge was later brought against the company for advertising something that was unavailable.) In 1998, a spokesperson for Hasbro, which made the Furby toy (which also sold out at Christmas), advised parents to say “I’ll try, but if I can’t get it for you now, I’ll get it for you later” [3].

The same consistency principle lies behind the advice an encyclopaedia company gives during its sales programme: make the customers fill out the sales agreements themselves. Once they’ve ‘owned’ the action by doing it themselves, they are far more likely to stick by it. (“There is something magical about writing things down”, says Amway Corporation literature.) Cialdini also explains the popularity (with companies) of testimonial contests – those where you write 50 words on why the product is good and stand a chance of winning something. The point of the contest is not for the company to get a single winning entry, but to induce all the entrants to enhance their commitment to the product by writing a testimonial. Influence has an extended discussion of this, and of how the power of small, initial, public, voluntary actions can be used to produce later compliance with much larger requests for action.

“Commitment decisions, even erroneous ones, have a tendency to be self-perpetuating because they can ‘grow their own legs'”
(page 97)

“You can use small commitments to manipulate a person’s self-image; you can use them to turn citizens into “public servants”, prospects into “customers”, prisoners into “collaborators.” And once you’ve got a man’s self-image where you want it, he should comply naturally with a whole range of your requests that are consistent with this view of himself”.
(page 74)

“…compliance professionals love commitments that produce inner change. First, that change is not just specific to the situation where it first occurred; it covers a whole range of related situations, too. Second, the effects of the change are lasting. So, once a man has been induced to take action that shifts his self-image to that of, let’s say, a public spirited citizen [or a guru’s disciple], he is likely to be public-spirited in a variety of other circumstances where his compliance may also be desired, and he is likely to continue his public-spirited behavior for as long as his new self-image holds.”
(page 84)

Social proof (social influence) is another extremely strong heuristic: “if everyone else is doing it, I should do it too”.

This too can be used unfairly – for example, evangelist Billy Graham has been known to ‘seed’ visits to towns in advance so that his arrival is met by an outpouring of thousands of the faithful – apparently spontaneous, but actually highly organised. (p. 101)

Positive association is also a powerful, and potentially automatic (see also), decision-shortcut:

In one study, men who saw a new-car ad that included a seductive young woman model rated the car as faster, more appealing, more expensive-looking, and better designed than did men who viewed the same ad without the model. Yet when asked later, the men refused to believe that the presence of the young woman had influenced their judgments. [4]

The same kind of automatic associations lie behind findings that people leave larger tips if paying by credit card (credit cards are associated with big spending, not always with paying back) and that “when asked to contribute to charity (the United Way), college students were markedly more likely to give money if the room they were in contained MasterCard insignias than if it did not (87 percent versus 33 percent)” (p. 164). Funnily enough, this didn’t hold for people with troubled credit histories!

Cialdini is quite clear that we can’t avoid using these short-cuts – after all, they work most of the time – but we must come down hard on those who exploit them.

“The pace of modern life demands that we frequently use shortcuts” (p. 234)

“We are likely to use these lone cues when we don’t have the inclination, time, energy, or cognitive resources to undertake a complete analysis of the situation. When we are rushed, stressed, uncertain, indifferent, distracted or fatigued, we tend to focus less on the information available to us. When making decisions under these circumstances, we often revert to the rather primitive but necessary single-piece-of-good-evidence approach.” (p. 235)

“The real treachery, and what we cannot tolerate, is any attempt to make a profit in a way that threatens the reliability of our shortcuts” (p. 239)

I don’t know how realistic this kind of individual/consumer vigilance is as a strategy, but Cialdini seems to believe that the only alternative is to change the whole pace of modern life:

The evidence suggests that the ever-accelerating pace and informational crush of modern life will make this particular form of unthinking compliance [shortcuts] more and more prevalent in the future (introduction, p. x.)

My default assumption used to be that the careless use of decision heuristics probably only applies to unimportant decisions. This took quite a severe knock from Cialdini’s discussion of the social contagion of suicide [5]. If people can be influenced by publicity about a suicide to kill themselves (and all the evidence is that they can be – and social proof is one of Cialdini’s six discussed shortcuts), then all of the decisions we make in life are open to exploitation by irrational factors under the control of others.

Refs below the fold

Continue reading “Influence (by Robert Cialdini)”

when choice is demotivating

Here’s a way to make people buy more of your stuff – give them fewer options. Douglas Coupland called the bewilderment induced by there being too many choices ‘option paralysis’ (‘Generation X’, 1991). Now social psychologists have caught on (‘When choice is demotivating’, 2000, [1]). Offer shoppers a choice of 24 jams and they are less likely to buy a jar than if offered a choice of 6 jams. Offer students a choice of 6 essay titles for extra credit, rather than 30, and more will take up the opportunity – and, what is more, they write better essays. Students given a similar choice of free chocolates (a restricted choice compared to an extensive one) made quicker choices (not too surprising) and were happier with the choices they made once they had made them.

ref

[1] Iyengar, S. S., & Lepper, M. R. 2000. <a href="http://www.columbia.edu/~ss957/whenchoice.html
“>When choice is demotivating: Can one desire too much of a good thing? Journal of Personality and Social Psychology, 79, 995-1006.

advertising induces familiarity, familiarity induces preference


We probably like to think that we’re too smart to be seduced by such “branding,” but we aren’t. If you ask test participants in a study to explain their preferences in music or art, they’ll come up with some account based on the qualities of the pieces themselves. Yet several studies have demonstrated that “familiarity breeds liking.” If you play snippets of music for people or show them slides of paintings and vary the number of times they hear or see the music and art, on the whole people will rate the familiar things more positively than the unfamiliar ones. The people doing the ratings don’t know that they like one bit of music more than another because it’s more familiar. Nonetheless, when products are essentially equivalent, people go with what’s familiar, even if it’s only familiar because they know its name from advertising.

Barry Schwartz. ‘The Paradox of Choice’ (2004)

I think the essential point is correct, but there is a sort of sneaking condescension here: All of you people (the ‘test participants’) only like the things you like because you’re familiar with them, not because of any rational or emotional affection for them (that’s just ‘some account’). What’s more – we (the psychologists) have done experiments which show (admittedly only in some circumstances) that familiarity leads to liking; and from this we’re prepared to generalise to all other circumstances you’re involved in. I parody, but I’m sure you see what I mean.

The fact that we tend to like the familiar isn’t too surprising. There’s even a good evolutionary reason for preferring what worked before – if it didn’t kill you last time, why risk doing something else this time? The single most useful thing you can measure to predict what someone will do in the future is not what they want to do, nor is it what they say they’ll probably do, nor what their friends and family will do, but simply what they did last time – such is the power of habit (For more on this see Hack #74 in Mind Hacks).

But the interesting thing about advertising and branding is that they make something familiar to us, and we then take this familiarity as an indication of preference. In other words, we don’t properly take into account that the brand is not familiar to us for any good reason.

Psychologically it’s not too surprising that this should happen. The study [1] that revived the subliminal perception field involved this ‘mere exposure’ effect. Participants were shown meaningless shapes for durations below the perceptual threshold, and subsequently they preferred those shapes to others that had not previously been displayed – even though they had not consciously perceived either set of shapes before.

However, is there any evidence that this kind of familiarity effect can be shown to compete with, or even over-ride, actual good reasons for liking or disliking a brand? Perhaps people are happy to use a fairly arbitrary guideline (familiarity) for unimportant decisions, or decisions where the choices are all pretty good, but when more is at stake familiarity is relegated down the table of influencing factors?

Ref

[1] Kunst-Wilson WR, Zajonc RB (1980). Affective discrimination of stimuli that cannot be recognized. Science, 207(4430):557-8.

an appropriate error

Anna Airoldi, the translator of Mind Hacks into Italian, has noticed a fantastic error in the published book. She writes:

(170) 1st paragraph of “How it works”;
I’m not entirely sure this is a real typo, considering the topic discussed in the paragraph, but “conservations” shouldn’t just be “conversations”?

She’s absolutely right – it should be ‘conversations’ not ‘conservations’. But although it is an error, in this case it is an appropriate error, because it appears in Hack #52 ‘Robust Processing Using Parallelism’ which discusses how we can read errorful or ambiguous sentences using multiple interacting levels of information to construct meaning. Normally this is a good thing, but it appears that in this particular instance the meaning was so obvious that our normally diligent editing process didn’t spot the mistake (my mistake in origin, incidentally)!

Ask philosophers about the mind

Ask Philosophers is a site where anyone can pose a question to be answered by some of the leading lights in world philosophy, including specialists in the philosophy of mind.

Scientists are often disappointingly dismissive of philosophy, usually without a good understanding of the breadth and depth of the modern discipline.

Philosophers are increasingly taking the role of ‘theoretical scientists’ – understanding the scientific data in great detail and applying the tools of conceptual analysis to make sure current theories are conceptually watertight (or highlighting areas where they are not).

This is particularly important in the cognitive and clinical sciences because many philosophical problems are encountered on a day-to-day basis.

For example, the mind-body problem – the problem of understanding the relationship between physical biological processes and thought – comes into stark relief when a clinician encounters a patient with brain injury.

Similarly, the age-old philosophical problems of understanding belief and knowledge become particularly important when the medical community have to define what it is to have a delusion – something that is usually considered a form of ‘damaged’ belief.

In the Ask Philosophers philosophy of mind section there are already some fantastic questions and answers online.

One person asks whether a person who is given medication to make her forget a potentially terrifying surgical experience was ever actually afraid; another asks whether it is possible to think about the thought you are currently thinking.

Anyone can pitch a question, so if you have any burning queries, philosophy’s finest are waiting for your challenge.

Link to Ask Philosophers Mind section.

Confusing symbols and reality

The latest Scientific American discusses the development of symbolic thinking in children, in an article by child psychologist Judy DeLoache.

Professor DeLoache was intrigued as to why young children sometimes try to pick up or use items in pictures, or fail to make sense of miniature objects – an error she calls ‘symbol confusion’:

Pictures are not the only source of symbol confusion for very young children. For many years, my colleagues and students and I watched toddlers come into the lab and try to sit down on the tiny chair from the scale model – much to the astonishment of all present. At home, Uttal and Rosengren had also observed their own daughters trying to lie down in a doll’s bed or get into a miniature toy car. Intrigued by these remarkable behaviors that were not mentioned in any of the scientific literature we examined, we decided to study them.

DeLoache thinks that ‘scale errors’ involve a failure of dual representation: children cannot maintain the distinction between a symbol and what it refers to.

To help children solve this problem, the researchers told the children that they had a ‘shrinking machine’ that replaced toys with miniature versions.

When children were told that the toy had been shrunk, they no longer needed to represent it as a symbol of another object: they simply assumed it was the same object, and no longer made ‘symbol confusion’ errors.

This work has had important legal implications, as young children giving evidence in cases of abuse are often given dolls – symbolic representations of themselves – and asked to describe or point out what happened.

Knowing at what age children are likely to make best use of this technique might be essential in obtaining reliable evidence.

Link to Scientific American article ‘Mindful of Symbols’.

Execution rests on IQ test

The BBC are reporting that convicted murderer Daryl Atkins may be executed by the state of Virginia, based on a recent IQ test on which he scored 74 – four points above the legal definition of retardation, which had previously excluded him from the death penalty.

When first tested in 1998, his IQ measured 59, well below the 70-point cut-off level. The cut-off of 70 is significant, owing to the design of the IQ test.

Intelligence shows a specific sort of distribution in the population: it follows a common statistical pattern known as a normal distribution.

Rather than design tests with arbitrary scales, modern IQ tests have been created with specific statistical properties to make them easier to interpret: the average IQ is 100, and the standard deviation (the average variation from the average) is 15. Click here to see a graph of this in a pop-up window.

The cut-off of 70 is two standard deviations below the average, and 95% of the population will score within two standard deviations on either side of the average. This makes the legal definition of retardation, at least in Virginia, equivalent to having an IQ score in the bottom 2.5% of the population.

There is no easy explanation as to why someone’s IQ score might rise over a 7-year period. Prosecutors argue that he ‘pulled punches’ on the original test; the defence argue that his interaction with lawyers has raised his IQ – although many factors, such as distraction, the skill and reliability of the tester, and familiarity with the tests, can affect the score.

Interestingly, the prosecution are arguing that his IQ is actually 76, 2 points higher than the defence claim. Why quibble over two points?

Possibly because of another statistical property of IQ. It has a standard error of measurement (the average error in assessing the presumed true score) of 5 points.

Allowing one standard error of measurement either side of the observed score, a score of 76 (a band of 71–81) stays clear of the cut-off of 70 – making Atkins eligible for the death penalty – whereas a score of 74 (a band of 69–79) still dips below 70, and so remains ambiguous.
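As a rough illustration of both points – the bottom-2.5% cut-off and the measurement-error band – here is a small sketch using scipy. The mean of 100 and SD of 15 are the standard IQ parameters, the SEM of 5 is the figure quoted above, and reading the argument as a ±1 SEM band is my own interpretation rather than anything from the court documents.

```python
from scipy.stats import norm

MEAN, SD, SEM = 100, 15, 5

# A cut-off of 70 sits two standard deviations below the mean,
# i.e. roughly the bottom 2.5% of the population.
print(round(norm.cdf(70, loc=MEAN, scale=SD), 4))   # 0.0228

# Treat one SEM either side of an observed score as the band of
# plausible 'true' scores.
for observed in (74, 76):
    low, high = observed - SEM, observed + SEM
    verdict = "dips below the cut-off" if low < 70 else "stays above the cut-off"
    print(observed, (low, high), verdict)
# 74 -> (69, 79): dips below 70, so a 74 is still ambiguous
# 76 -> (71, 81): stays above 70
```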

Interestingly, it was a Supreme Court decision based on Atkins’ own case (Atkins v. Virginia, 2002) that first made it illegal to execute convicts considered legally retarded.

Statistical properties aside, the whole concept of IQ itself is still controversial among some psychologists, and was most notably criticised in Stephen Jay Gould’s book The Mismeasure of Man.

Link to BBC News story.
Link to story from Daily Telegraph.

Understanding ‘Aha!’

To this day, psychologists understand little about ‘insight’ – that Eureka moment when a long-sought answer suddenly jumps to mind. These “Aha!” experiences range from the trivial (suddenly solving a crossword clue) to the profound (like Kary Mullis’s Nobel-Prize-winning invention of the polymerase chain reaction, the basis of which occurred to him while driving home one day).

According to Edward Bowden and colleagues, writing in the latest issue of Trends in Cognitive Sciences, insight is achieved via the right hemisphere (cf. Hack #69), which “engages in relatively coarse semantic coding, and is therefore more likely to maintain diffuse activation of alternative meanings, distant associations and solution-relevant concepts”. Unfortunately, by its nature this diffuse activation is often weak and beyond the conscious reach of the struggling thinker.

In support of this they’ve shown, for example, that when people are presented with the solution to a problem they couldn’t solve, they’re quicker at reading this solution aloud when it’s presented to their left visual field (right hemisphere) than to their right visual field (left hemisphere). This suggests the right hemisphere had been closer to reaching the solution than the left. Moreover, brain scans revealed more activity in the anterior superior temporal sulcus of the right hemisphere for solutions reached by insight than for solutions not reached by insight. So, perhaps you should do tomorrow’s Sudoku while looking out of the left corner of your eyes!

Continue reading “Understanding ‘Aha!’”

The cognitive basis of good and evil

Michael Shermer, who writes the Skeptic column for Scientific American and who is normally right on the mark, has this to say about the concepts of Good and Evil:


‘The myth of good and evil is grounded in Christian theology and the belief that such forces exist independently of their carriers,’

You can read the full article – byline ‘It is too simple to blame evil people for horrifying acts of terror’ – <a href="http://www.godlessgeeks.com/LINKS/SomethingEvil.htm">here</a>. I don’t want to disagree with Shermer’s conclusions, just nit-pick on this specific point. In effect, I totally disagree with the above statement – let’s call it the ‘Cultural Invention of Evil Theory’. Rather, as readers of Mind Hacks might have guessed, I believe seeing Good and Evil in the world is the result of a basic cognitive process which we all share.

The myth of good and evil arises from a psychological bias we all have, which in the social psychology biz is called ‘the fundamental attribution error’. This is simply that when looking at other people’s behaviour we tend to over-emphasise inherent characteristics (e.g. “he didn’t do the washing up because he’s lazy”), while when looking at our own we tend to over-emphasise situational variables (“I didn’t do the washing up because I had to go to work and do lots of marking”). This bias probably exists because, although it is often wrong, it is an adaptive way to think about the causal world. When trying to understand your own behaviour, it is easiest to look at the things that vary (i.e. the situation) and try to control them; but when looking at other people’s behaviour, the major variable is which other person you are looking at. It doesn’t make it right, but it is just easier to see other people as Good, or Evil, or Lazy, or Clever than it is to take full account of the complexity of both their situation and their personality.

Surely that is sufficient reason to explain the persistence of notions of good and evil, and it also avoids the problem of explaining how non-Christian cultures come to use the concepts too. The cultural background just flavours a universal – a universal which arises from the information-mechanics of our cognitive apparatus.

BBC Frontiers on the psychology of risk

BBC Radio 4’s science show Frontiers goes for a cognitive science two-in-a-row as it follows up last week’s programme on neuroprosthetics with an analysis of the psychology of risk-taking, sensation seeking and risk-based reasoning.

Psychologist Marvin Zuckerman tackles evolutionary explanations for individual differences in risk-taking, and discusses the personality attributes of, and biological influences on, sensation-seeking people.

The programme also interviews people who are typically defined as high sensation seekers about their motivations and experiences, such as author and adventure climber Mick Fowler.

Link to Frontiers web page on ‘Risk and Risk Taking’.
Link to realaudio archive of programme.

‘A Genius Explains’

There was an interesting piece in last weekend’s Guardian (A Genius Explains) about a high-functioning autistic man who is also a savant (i.e. he has amazing intellectual abilities – he can recall pi to 22,514 decimal places, for example). Autistic savants are more common than non-autistic savants, but usually they aren’t able to explain quite so lucidly how they manage to do the things they do.

The article left me curious, and a little jealous (“It’s mental imagery,” he said. “It’s like maths without having to think.”), and made me feel like we’re in for some interesting times ahead as research into savantism, synaesthesia, developmental cognitive neuroscience and mental imagery converges.

Abstract structure need not be based on language

Grammar-impaired patients with problems in parsing sentences can still parse sums. This weighs against the argument that language underpins our capacity for abstract thought: these individuals have problems telling “dog bites man” from “man bites dog”, but no similar problems with 112-45 vs 45-112.

Aphasia and other language problems stemming from brain damage can indeed lead to calculation problems, but this study suggests that they are not necessarily intertwined. As the authors put it, the performance of their subjects is “incompatible with a claim that mathematical expressions are translated into a language format to gain access to syntactic mechanisms specialized for language.”

Continue reading “Abstract structure need not be based on language”