2 April 2001
I should've posted these thoughts on April 1, in honor of the day, but
considering I didn't =have= these thoughts until this morning, I =suppose=
I won't lie by backdating this entry.... though I do backdate my checks
(which, I know, has no effect on late fees. It's a totally pointless
lie.)
In any case, I wanted to talk about truth and lies.
First of all, I want to concur with Keats that "Beauty is truth, truth
beauty -- that is all / Ye know on earth, and all ye need to know." That is
not to say that lies are necessarily ugly, as many lies are very pretty
indeed (who really cares for an ugly lie? Well, postmodernists, but
that's a different issue. Still, believing in an ugly lie is one of the
more futile things I have ever heard of.)
I'll get back to that sentiment, but I want to revisit something I wrote
in January 1998 - about two lying techniques. Both involved telling the
truth, but the first involved telling =both= the truth and the exact
opposite alternately (this works for yes/no questions), and the other
involved taking all questions literally (this is the "it depends on the
meaning of the word 'is'" kind of lie). Basically, one of the more
effective ways to lie is to tell the truth in such a way that no one
believes you.
Examples of this:
a) Tell somebody your big secret/confession/surprise on April
Fools' Day. This has limited usefulness as one can use it only one day of
the year, and you need to have as your victim someone who is on guard
against April Fools' Day stories.
b) Tell the truth in a sarcastic tone. Again, you have to pick
someone who has sense enough to recognize sarcasm. A sneer on one's face
is also helpful.
Actually, that second technique -- the sneer and the tone of voice -- can
come in really handy for many people in telling lies. If one can lie
nonverbally, one has got it made. That is really the only reason I can lie
so well (again, to remind some people, I don't lie often; as to why, you
shall see below). However, some reactions are hard to control, and so some
people prefer to suppress all nonverbal cues when telling lies, though
that technique really just conveys no information at all, as opposed to
telling a lie (this is why good poker players have a "poker face". I've
found non-poker faces work only against newbies, who can easily be
suckered into believing one is bluffing (or not) by one's facial
expressions.)
Most of the time, one does not really want to lie but to conceal
information. First of all, if you can make sure no one knows you =have=
info to conceal, you're best off. For example, I got to go to Wendy's with
my Ma or Dad after my math class on Saturday mornings when I was in middle
school. I don't think Amy & Carey ever caught on to this, as I was careful
to make sure all evidence was gone by the time we got home. They never
even knew there was something to ask about. I could still have lied about
it, but the issue never came up because they didn't know I had information
that they would want to know. This is a secret-keeping technique: don't
let other people know you have secrets!
Back to frequency of lying -- something many children do not understand,
though it seems many catch on by adulthood -- if you lie a lot, no one
will believe anything that comes out of your mouth (if you're really
slick, you'll use this to your advantage by telling the truth, which you
know no one will believe). If you lie too much, the power of your lies
diminishes. However, if you never lie, you never get some of the benefits
of lying (such as avoidance of punishment, or impressing strangers). How
to balance this?
In real life, this is a sticky problem, because the payoffs don't
necessarily have a numerical value, and some people are more likely to
believe or disbelieve a lie, depending on what it costs them to be
skeptical (so a tired parent may believe a lie about cleaning your room,
but one who has got a neat streak may check the room right away). Still,
let me consider a model problem where we can see the benefits
directly. We're going to play a super-simplified "poker" game:
From a shuffled deck of four cards (jack, queen, king, and ace), the first
player pulls out a card and looks at it. Then the first player lays down
a bet of one dollar (or not). Then the second player decides whether or
not to call the bet. If the second player does not call the bet, the
second player gives the first player a dollar. If the second player calls
the bet, and it turns out the first player had drawn an ace, then the
second player pays the first player 3 dollars. If the first player had
=not= drawn an ace, the first player has to pay the second player 2
dollars.
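(An aside for the programmers: if you want to fiddle with this game
yourself, here is a little Python sketch of one round. The function and
variable names are just mine, and the "always bet when holding an ace"
behavior is an assumption I'm making about sensible play.)

    import random

    def play_round(p_bluff, q_call):
        """One round of the toy game; returns the first player's winnings."""
        card = random.choice(["jack", "queen", "king", "ace"])
        has_ace = (card == "ace")
        # Assume the first player always bets with an ace, and bluffs a
        # non-ace with probability p_bluff.
        bets = has_ace or (random.random() < p_bluff)
        if not bets:
            return 0
        # The second player calls any bet with probability q_call.
        if random.random() >= q_call:
            return 1                      # bet not called: win a dollar
        return 3 if has_ace else -2       # called: +3 with an ace, -2 on a bluff

    # Average over many rounds to estimate the first player's take per round.
    rounds = 100_000
    print(sum(play_round(2/9, 1/3) for _ in range(rounds)) / rounds)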
This is a game in which the first player has complete information and when
deciding to bluff =knows= they are bluffing (this is a problem in real
poker, because some people consider themselves bluffing even when they
hold a strong hand... they simply do not know the probabilities of the
game. For example, in five card stud, two pair, even low, is a great
hand.) So, the question is, how often should the first player
bluff? Well, let's consider the worst-case scenario: the second player
=always= calls a bet. That means every time the first player bluffs, they
lose 2 dollars, and every time they don't bluff, they win 3 dollars if they
got an ace and nothing if they didn't. So if p is the frequency at which
the first player bluffs, then the expected winnings of the first player
is: (3/4)*p*(-2) + (1/4)*3 = 3/4 - (3/2)*p. Obviously, if the second player
=always= calls the bet, the first player should never bluff. However,
the second player then has an expectation of -3/4 per game, which =can't=
be optimal for the second player. Likewise, if the second player =never=
calls the bet, they are guaranteed to lose 1 dollar on every round.
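(For the skeptical, the same arithmetic in a few lines of Python, using
exact fractions so there's no rounding to hide behind; the function name
is just my own shorthand.)

    from fractions import Fraction as F

    def take_vs_always_caller(p):
        # 3/4 of the time: no ace; bluff with probability p and lose 2 when called.
        # 1/4 of the time: an ace; bet, get called, win 3.
        return F(3, 4) * p * (-2) + F(1, 4) * 3

    for p in (F(0), F(2, 9), F(1, 2), F(1)):
        print(p, take_vs_always_caller(p))   # matches 3/4 - (3/2)*p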
So we need to optimize for =both= the first and the second players. Let
p=frequency of first-player bluffing, and q = frequency of 2nd player
calling a bet. Then the payoff function for the first player (this is a
zero-sum game, so the second player's payoff is simply the negative of the
first player's) is calculated thusly:
bit from 1st player getting an ace, 2nd player calls the bet: (1/4*q)*3
bit from 1st player getting an ace, 2nd player does =not= call the bet: (1/4*(1-q))*1
bit from 1st player =not= getting an ace, bluffing, 2nd player calls the bet: (3/4*p*q)*(-2)
bit from 1st player =not= getting an ace, bluffing, 2nd player does not call the bet: (3/4*p*(1-q))*1
If one expands the whole thing, you get the expected value for the first
player: (1/2)q + 1/4 - (9/4)pq + (3/4)p
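(Again for the skeptical: here's that expansion checked in Python with
exact fractions. The four terms in payoff() are the four "bits" above;
payoff() is just my name for it.)

    from fractions import Fraction as F

    def payoff(p, q):
        """Expected take of the first player, per round."""
        return (F(1, 4) * q * 3                 # ace, bet gets called
                + F(1, 4) * (1 - q) * 1         # ace, bet not called
                + F(3, 4) * p * q * (-2)        # bluff gets called
                + F(3, 4) * p * (1 - q) * 1)    # bluff not called

    # Spot-check against (1/2)q + 1/4 - (9/4)pq + (3/4)p at an arbitrary point:
    p, q = F(1, 3), F(1, 5)
    print(payoff(p, q), F(1, 2)*q + F(1, 4) - F(9, 4)*p*q + F(3, 4)*p)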
For each given propensity of the second player to call (q), there is a p
which will maximize this amount (and such p will be either 0 or 1, for
this expression is linear in p) -- so the max value will be either
(1/2)q + 1/4 (for p=0) or 1 - (7/4)q (for p=1). The only q for which there
is no disparity between these two values is q=1/3. Likewise, for any given
bluffing propensity p, there is a q which =minimizes= the value (because
the second player wishes to minimize the first player's take). Again, q
will be either 0 or 1 for this minimum value: 1/4 + (3/4)p (for q=0) or
3/4 - (3/2)p (for q=1). The only p for which these two values have no
difference is p = 2/9. So the minimax point is achieved when the first
player bluffs with a frequency of 2/9, and the second player calls the bet
with a frequency of 1/3. This results in an expected return of 5/12 per
round for the first player (yes, this is an unfair game, stacked against
the second player).
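(And if you'd rather let a machine grind out the saddle point than trust
my algebra, a brute-force scan over p and q lands in the same place. This
reuses the payoff() sketch from above; the grid step of 1/36 is chosen so
that 2/9 and 1/3 sit exactly on the grid.)

    from fractions import Fraction as F

    def payoff(p, q):
        return F(1, 2)*q + F(1, 4) - F(9, 4)*p*q + F(3, 4)*p

    grid = [F(i, 36) for i in range(37)]
    # First player: pick the p whose worst case over q is largest.
    best_p = max(grid, key=lambda p: min(payoff(p, q) for q in grid))
    # Second player: pick the q whose best case over p is smallest.
    best_q = min(grid, key=lambda q: max(payoff(p, q) for p in grid))
    print(best_p, best_q, payoff(best_p, best_q))   # 2/9, 1/3, 5/12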
But, as we saw above, there =are= situations in which the first player can
win more, or the second can lose less. The point is that things are in
balance at the point which we found. If the first player bluffs more
often than 2/9 of the time, then the second player should =always= call
the bet, and the second player ends up losing less money (and, if the
bluffing gets past half the time, will actually =win= money). If the
first player bluffs less, the first player doesn't
make as much money. If the second player calls the bet more often than
1/3, the first player can stop bluffing entirely and win more money in the
long run. And so on, and so forth. The solution we found above was a
minimax solution: the first player should think of the situation in which
the second player plays with an optimal strategy (minimizing the return),
and try to maximize their return in that situation. The second player,
likewise, must consider that the first player will bluff an optimal amount
whatever strategy they (the 2nd player) choose, and must try to minimize
the return in that situation. Simply put, this is the result which one
can expect if both players are completely rational beings, able to react
immediately to the way others play.
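(One more sketch, to make that "balance" concrete: for a handful of
bluffing frequencies, here is what the first player is guaranteed once the
second player adapts. A rational second player will sit at whichever of
q=0 or q=1 hurts the bluffer more, since the payoff is linear in q; note
that nothing beats the 5/12 earned at p = 2/9.)

    from fractions import Fraction as F

    def payoff(p, q):
        return F(1, 2)*q + F(1, 4) - F(9, 4)*p*q + F(3, 4)*p

    for p in (F(0), F(1, 9), F(2, 9), F(1, 3), F(1, 2)):
        # The adapted second player picks whichever extreme hurts the bluffer more.
        guaranteed = min(payoff(p, 0), payoff(p, 1))
        print(f"bluff {p} of the time -> guaranteed take {guaranteed}")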
However, luckily for many of us who are =not= totally skillful poker
players (or liars), people aren't completely rational, and often one finds
that one is far from a minimax solution. For example, there's this really
mean 7 card stud variant I like to play called Baseball. It has =8= wild
cards, and people have the opportunity to "buy" more cards. This game
splits the pot between the high hand and the high spade in the hole
(there are three face-down cards, a.k.a. cards in the hole). If one is not
holding =at least= a full house, one is not in a good situation. This
game has 8! 8! wild cards! That screws with probabilities like nobody's
business.
Well, people new to the game of poker have just learned the order of the
hands, and, if one is even sneakier, one has shown them the probabilities
for the hands in regular 7-card stud. So, for example,
one newbie may be getting all excited over having a flush (with two wild
cards). They decide to stay in it all the way. I, on the other hand,
have four-of-a-kind (also with two wild cards). One does =not= bluff in a
game like Baseball, esp. with newbies, because they think they have
fabulous hands and will stay in til the end. This is the case where a
player =always= calls a bet (you can be sure =someone= will stick with you
to the end, because they think their "high" hand is still really high with
that many wild cards in the game), so one should never bluff. On the
other end, we have five card stud, no wild cards, high hand only. Play
with the same newbies, =without= them knowing the probabilities of hands,
and they're almost guaranteed to fold. This is a case where one should
always bluff, for the probability of getting =at least= a pair is less
than half (a quick check of that claim is below). If one has a pair, stay
in; heck, if you've got a decent
high card, stay in. You can even do this if a newbie has a pair showing
(real stud has 4 up cards -- so only one card is hidden), esp. if it's a
low pair. Many people won't stay in even that long. Still, I wouldn't
play 5-card stud, because people tend to find it boring. One can find
similar games, which are more interesting, and just as likely to scare off
newbies.
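(By the way, that "less than half" claim about pairs is easy to check by
counting hands; the little Python bit below just counts the five-card
hands in which some rank repeats, and ignores the wrinkle that a no-pair
straight or flush still beats a pair.)

    from math import comb

    total = comb(52, 5)
    # Hands in which all five cards have different ranks: choose 5 of the
    # 13 ranks, then one of the 4 suits for each chosen rank.
    all_different = comb(13, 5) * 4**5
    print(1 - all_different / total)   # about 0.49 -- a bit under half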
The problem is, of course, when one runs into a skilled poker
player. That's the time to cash in one's chips, and watch them at work.
In any case, I think this is long enough for an entry, so I'll have to
talk about truth some other time.