9/24/2012
Week 5 Questions
CCDP
I’m a bit confused by the distinction between commitment-preserving
inferences and entitlement-preserving inferences, as spelled out in CCDP
pp. 12-13. It sounds like you are saying that commitment-preserving
inferences from A to B are ones in which the inference from A to B is
non-defeasible, and entitlement-preserving inferences from A to B are those
in which the inference is defeasible. Thus one is not committed to there
being good weather upon seeing a red sky at night because the inference
could be defeated by the observation of a barometer, for instance. Is the
inference from X being a zebra to X having stripes commitment-preserving or
entitlement-preserving? The inference seems defeasible, as in the case
where X is albino, so I’m tempted to take it to be entitlement-preserving.
If that’s right, commitment-preserving inferences sound an awful lot like
formal deductive inferences, especially as your example on p. 13 seems to
run:
(∀x)((x is a yacht . x is in the harbor) ⊃ (x is a sloop))
John B is a yacht . John B is in the harbor
Therefore, John B is a sloop
I don’t see why we need to take this to be a *materially* good inference at
all, rather than a formally good one. Furthermore, if the inference from X
being a zebra to X having stripes is simply entitlement-preserving, I fail
to see how we commit ourselves to anything we haven’t made explicit in our
reasoning process, except for, possibly, the purely analytic statements
which follow from what we’ve stated. On the other hand, if the inference
from X being a zebra to X having stripes is consequence-preserving, it
seems like it’s going to be quite difficult to draw a hard line between
commitment-preserving and entitlement-preserving inferences.
*MIE*
On p. 185, you mention that linguistic scorekeeping practice is “doubly
perspectival”, in that “What C is committed to according to A may be quite
different, not only from what D is committed to according to A, but also
from what C is committed to according to B.” I was wondering how essential
this latter perspectival aspect is. If A and B were to “make explicit” what
they attributed to C with the use of scare-quotes and de re ascriptions –
thus enabling them to make distinctions between C’s doxastic commitments
and substitutional commitments – would they eventually always come into
agreement concerning C’s commitments? If the answer is no – is it possible
for A and B to both be, in some sense, “right” about C here, or will there
always be a fact of the matter?
*BSD*
I’m a bit confused about the second component of the argument on p. 82:
Only something that can *talk* can do that, since one cannot *ignore* what
one cannot attend to (a PP-necessity claim), and for many complex
relational properties, only those with access to the combinatorial
productive resources of a *language* can pick them out and respond
differentially to them. No non-linguistic creature can be concerned with
fridgeons and old-Provo eye colors.
I was thinking that the difficulty you had in mind was not the ability to
ignore certain complex relational properties when they were irrelevant, but
the ability to decide *what* to ignore. How does having the ability to
deploy a discursive vocabulary aid us in this decision? Is it primarily a
matter of normative considerations?
Billy Eck
*MIE*, Ch. 3:
Imagine an objector who claims that your deontic scorekeeping story of
discursive practice is insufficient for conferring meanings onto
assertions. She asks us to imagine two autonomous discursive practices
that (currently) share the same exact set of possible assertions (the
sentences *p*, *q*, *r*, etc.). In both ADPs, she stipulates, a speaker is
entitled to assert a sentence in the same circumstances. Both ADPs warrant
the same set of material inferences from one sentence to another. In
short, the scorekeeping is officiated in the same way in both.
In your story, she claims, the assertion of any sentence in either ADP must
be thought of as expressing the same content as in the other. But she
thinks that they don’t have to. For in ADP1, *p* means “There’s a rabbit!”
and *q* means “That’s blue!”, while in ADP2, *p* means “There are some
undetached rabbit parts!” and *q* means “That’s grue!” Your story, she
therefore thinks, is insufficient to confer meanings on assertions.
BB: Appendix II to MIE 6 on ‘gavagai’
Where has she gone wrong in her reasoning?
*BSD*, Ch. 3:
This is more of a drawn out line of thought than a question, as (I think)
I answered what was to be my initial question. I'd like to send it anyway,
as I think bringing it up in class might shed light on the AI debate before
*BSD*. I'm also interested in whether you think I made some error in the
discussion.
In the section on arguments against AI functionalism, you paraphrase a
locus of Hubert Dreyfus’s criticisms of classical symbolic AI and translate
it into MUR-talk to make it address an “algorithmic pragmatic elaboration”
conception of AI (78, 75).
Dreyfus’s criticism is, roughly, that classical symbolic AI requires that
the ordinary practical skills that are necessary for our understanding and
going about in the world be codifiable in explicit rules within a program,
and that that reveals an incoherent intellectualism lurking in the
background that seeks to explain all knowing-how in terms of knowing-that. You
explain that “the corresponding argument against the substantive practical
algorithmic decomposability version of AI would have to offer reasons for
pessimism about the possibility of algorithmically resolving essential
discursive knowing- (or believing-)*that* without remainder into
non-discursive forms of knowing-*how*” (78).
I’m interested in what such an argument could look like and whether or not
it could count as having the form of argument that you later designate as
conclusive against the “pragmatic” conception of AI, namely, that “some
aspect exhibited by all autonomous discursive practices…is not
algorithmically decomposable into non-discursive practices-or-abilities”
(79).
Here’s a familiar suggestion from Gilbert Ryle’s classic chapter “Knowing
How and Knowing That”:
The consideration of propositions is itself an operation the execution of
which can be more or less intelligent, less or more stupid. But if, for any
operation to be intelligently executed, a prior theoretical operation had
first to be performed and performed intelligently, it would be a logical
impossibility for anyone ever to break into the circle.
(p. 30)
Ryle takes this line of thought as conclusive against what he sees as an
intellectualism. There is always a knowing-how behind a knowing-that, not
vice versa. But does it have any bearing on AI? It says that
understanding a bit of language will require some skill that is not itself
the understanding of some other bit of language. But is reacting to the syntax of
some line of code understanding a bit of language? I’d presume no. So
Ryle’s point doesn’t seem to have much bearing on the AI debate. Dreyfus
seems to miss this when he takes his considerations to warrant pessimism
about AI.
But perhaps we can see why such a view is tempting, by appreciating your
claim that computer languages are in principle pragmatic metavocabularies for
some ADP. Because they play this role, it is easy to treat computers as
listening to the coding language so as to grasp what to do. But that’s the
wrong model. The computer language is not just a metavocabulary but also
constitutes the set of stimuli to which the computer, *qua* transducing
automaton, responds without having to understand what the syntax of the
stimuli might be used to express.
BB: Dennett on stupider and stupider homunculi.
Chuck Goldhaber
*Between Saying and Doing #1*
On page 46 of BSD, Brandom faces a dilemma concerning how to
characterize his LX diagram. As Brandom has it, Vconditionals is VP-suff
for Pinferring. Either inferring is understood on the act-object model or
not. Assuming that inferring is understood on the act-object model, any
case of inferring contains an act of inferring something from something,
and, independent from the act, what was inferred from what. On this model,
conditionals seem to state a relation between the objects of the act, not
between the acts themselves. As Brandom himself adds in a footnote to the LX
diagram, conditionals assert “explicitly that one thing that can be said
follows from another thing that can be said”, not that “the act of
inferring is permissible”. This makes it seem as if conditionals direct
their explication at relations among the objects of inference. Since the objects
of inference must be vocabulary expressing propositional contents,
Vconditionals and Pinferrings stand in a VV-suff relation, and
“Pinferrings” must then be written as “P*V*inferrings”. In the main text,
Brandom also claims that “what the conditional says explicitly is what one
endorsed implicitly by doing what one did”. This characterization of what
the conditionals explicate differs from the one in the footnote in that it
seems to make essential reference to a *doing*. However, this is of no
help to Brandom since the conditional makes explicit an *endorsement*, not
the ‘doing’ it was caught up in. On the current model, since endorsements
admit of an act and an object, the conditional makes explicit the relation
of the objects of the act, i.e., the relation holding between propositional
contents.
I can imagine a response according to which Vconditionals explicate only
those relationships between propositional contents that have been (in some
way) involved in the act of inferring; for how else are conditionals to
know which propositional contents stand in the desired relationship? This
would be in effect to throw off the act-object model as a fictional
idealization. Propositional contents are unintelligible apart from the
acts they are caught up in. But, this rejection requires that conditionals
themselves can be made sense of only by asserting one. So, “Vconditional”
must be written as “*P*Vconditional”. So, PVconditionals now stand in a
PVPV-suff relationship with PVinferrings. Either take the act-object
distinction seriously with respect to inferrings, as Brandom seems to do,
and be left with a VV-suff relation, or else treat it as a fiction, but be
left with a PVPV-suff relation.
*Between Saying and Doing #2*
Brandom draws a distinction between algorithmic elaboration and practical
elaboration by training. That is a fine distinction, but it leaves out the
most intriguing aspect of most of our actions. We can act effectively and
in the appropriate manner in the face of novel and unexpected circumstances
for which we have not been trained. Brandom is perhaps right in
characterizing training as a sort of ‘feedback loop’ of perception,
responsive performance, and perception of the results of the performance. But,
a single run through Brandom’s loop would not adequately capture what is
unique about the sort of case I have in mind. This is because there was no
training for the act in those circumstances, the particular act was not
itself part of a training schedule, and that particular type of act may
never be performed again. Brandom’s loop would not show how such acts
differ in kind from practically elaborated ones. When Rossi passes Lorenzo
on turn 3 at Catalunya (choose an example that works for you), he may not have
trained for the way he had to do it, nor was he training while he
passed Lorenzo, and indeed, he may never perform that way again. It is
obvious that no addition of pragmatically elaborated sub-performances
captures what is unique about such acts, since the question would be left
over of why the sub-performances were added up in *those* circumstances.
The apt characterization of such performance should, in the end, be given
in terms of the agent’s or performance’s *goodness* with respect to what
the agent is trained to do or the performance’s kind. The appropriate
characterization of Rossi’s action is the attribution of an *accolade* to
Rossi, “Wow, Rossi is great (as a GP racer)!”, or to the performance
itself, “What a close-off!” Because no training was involved in the
performance under those circumstances, we have nowhere else to look for an
account of the act except the agent or performance itself. Similar points
seem to hold for interpretation, the difference being that we are all good
at it, so no accolades are given.
*Making It Explicit*
Last time I asked about how malapropisms apply to a semantically backed
pragmatics. I suggested it presented a counter-example to your view. In
your response, you noted that one also must make pragmatic inferences in
order to understand someone, though no general theory could be given about
most pragmatic inferences we would have to make. In response, I want to
suggest that *no* inferring is required in understanding another person’s
words. I can set it out in a double dilemma. The inference is either an
act or not. If the inference is an act, it can be made explicitly or
implicitly. Of course, we do not need to explicitly infer when we
interpret a person’s words; if that were so, we would not need Brandom’s
book to tell us that is how we do it. I would know that is what I did in
the way I know other things about myself. So, perhaps, as Brandom seems to
suggest, we may infer implicitly in attributing commitments and
entitlements to others. But how should we characterize implicit
inference? Commitments
and entitlements are already propositional contents. And, in order for me
to infer one content from another, I must understand the propositional
contents involved in the inference. But, “I understand *p* implicitly”
makes no sense, at least when “understand” is considered an achievement
verb, as we are currently considering it. This might suggest that
inference is not to be considered as an act that goes on in interpreting
someone. Perhaps we are only *attributing *commitments and entitlements
that *already* have some requisite pragmatic inferential structure. But in
order to make room for malapropisms, we cannot rely on a structure already
in place. Therefore, it seems that in interpreting another person we need
not infer at all. (Much here depends on how we cash out ‘explicit’,
‘implicit’, and their adverbial counterparts.)
Another thing to tackle is what we do when we *attribute* and *take*: In what
sense are they doings? What does it mean to do one of them? Are they merely
innocent philosophical idealizations?
*Conceptual Content and Discursive Practice*
Brandom claims that the “six consequential relations among commitments and
entitlements…confer inferentially articulated, and so genuinely conceptual
content on the expressions, performances, and statuses that have the right
kind of scorekeeping significances in those practices.” What does ‘confer’
mean? The Queen can confer knighthood onto someone, and in doing so the
person undergoes a shift in state from not being a knight to being a knight.
An analogous shift in state cannot be found in Brandom’s system. Commitments
and entitlements are commitments and entitlements to propositional
contents, but to be a propositional content essentially involves the
possibility of its expression in language. So, expressions are already
possessed of propositional content. The same can be shown to hold, I
think, for ‘performances’ and ‘statuses’. If this is so, what does
‘conferral’ mean exactly? This question is very similar to one I asked two
weeks ago.
Shivam Patel
BSD 3 introduces a novel kind of PP-sufficiency relation to be laid
alongside the (by then familiar) one on which elements of one practice must
be algorithmically re-arranged (in uniform and antecedently specifiable
ways of the kind an automaton can carry out) in order to yield another
practice. The new relation is called "practical elaboration by training".
Its job in BSD 3 is to provide some relief to those who fear that if
sapience turns out not to be algorithmically decomposable into primitive
(clearly non-linguistic) abilities, then it becomes wholly mysterious. This
worry is unwarranted, BSD 3 says, because there is this other,
naturalistically equally respectable, form of elaboration/decomposition.
My (probably simple) question concerns the relationship between these two
relations between practices. If two practices are related by the
training-relation (to speak unofficially), is there not generally good
reason to think that the algorithm-relation also holds? Sure, we have no
idea how it is properly spelled out. As BSD 3 affirms, the specifics "vary
wildly from case to case, and depend heavily on biological, sociological,
historical, psychological and biographical contingencies" (BSD p. 85). But
this does not show that there is not also a way of decomposing the target
practice algorithmically into the base practice. And it also does not show
anything about the a priori knowability of *whether* there is such a way.
Often, in philosophy, this is all we need, isn't it?
Matthias Kisselbach
MIE, chapter 3:
I would like to ask a clarifying question concerning the conceptual relation between the assertional/doxastic sort of commitment and the corresponding entitlement: Can the former concept of commitment be understood solely in terms of the latter concept of entitlement? In other words, is the former concept conceptually reducible to the latter?
On the one hand, Brandom explicitly denies a familiar way of achieving this conceptual reduction. He explicitly rejects the traditional definition of commitment (i.e., responsibility) in terms of entitlement (i.e., authority) plus formal negation, and vice versa (p. 160). (In his project, commitment and entitlement are rather used to define the material sort of negation, or incompatibility.)
On the other hand, however, it appears (at least to me) that Brandom suggests another way to understand commitment in terms of entitlement all the way down. To begin with, according to his analysis (pp. 172-80), the pragmatic significance of assertion consists of two factors: (1) undertaking the commitment (i.e., responsibility) to show entitlement to the assertion if challenged in an entitled way; (2) unless this commitment is breached, entitling (i.e., authorizing) the audience to the same assertion. Brandom then goes on to explain the commitment undertaken in (1) in terms of the audience’s entitlement to internally sanction the breaching subject by depriving her assertions of the power to entitle the audience to them. Certainly, the assertional/doxastic commitment essentially has propositional content, and that content is explained partly in terms of the commitment-preserving inferential role it plays, that is, in terms of the other commitments it entails and is entailed by. However, given the basic pragmatist analysis of assertion above, each of these other commitments can again be understood in terms of the relevant entitlement, all the way down. Clearly this story does not take the form of a straightforward definition like the traditional one, but it still seems to me to amount to a sort of reduction of commitment to entitlement.