mark my words

atheist father praying for a secular future

Funny quote about Jordan Peterson

If psychology has always been the smorgasbord of soft sciences, Peterson’s brand of profundity is the sprawling, all-you-can-eat Mandarin buffet – a medley of undercooked ideas warmed under the heat lamp of his own faintly flickering intellect.

January 31, 2018 Uncategorized

Browsing too fast.

I thought this was funny.  I was doing searches and modifying the URL search parameters when I got this message:

This kind of allows me to do super searches, but I guess they don’t like it.


September 21, 2017 Uncategorized

Funny AI Book Recommendation

On Goodreads:

Because I liked a book about Bill Clinton published in the 90s, I might like an advanced poker book?!  The funny thing is I’m pretty sure I’ve read that book too, so it is a good suggestion.

June 9, 2017 Uncategorized

LSTM Cells

I haven’t quite got to LSTMs in my book yet, but they were talking about them at the TensorFlow Dev Summit.
I found this neato resource

LSTMs really remind me of logic gates.  Especially the flip-flop, where a few NOR gates are put together to make memory.

Using a series of LSTMs might be good for parsing the entire bidding auction.  The bidding is kind of like a sentence.
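The gate analogy can be made concrete. Here is a minimal sketch of a single LSTM step in NumPy, with the input/forget/output gates written out explicitly; all the sizes and random weights are my own placeholders, just to show the mechanics:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b stack the four gates
    (input, forget, output, candidate) along the first axis."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b          # pre-activations for all gates
    i = sigmoid(z[0:n])                 # input gate: admit new info
    f = sigmoid(z[n:2*n])               # forget gate: keep or clear memory
    o = sigmoid(z[2*n:3*n])             # output gate: expose memory
    g = np.tanh(z[3*n:4*n])             # candidate cell value
    c = f * c_prev + i * g              # cell state: the flip-flop-like memory
    h = o * np.tanh(c)                  # hidden state / output
    return h, c

rng = np.random.default_rng(0)
x_dim, h_dim = 8, 4                     # e.g. a small encoded bid, small state
W = rng.normal(size=(4 * h_dim, x_dim)) * 0.1
U = rng.normal(size=(4 * h_dim, h_dim)) * 0.1
b = np.zeros(4 * h_dim)

h = np.zeros(h_dim)
c = np.zeros(h_dim)
for t in range(3):                      # feed a toy "auction" of 3 bids
    h, c = lstm_step(rng.normal(size=x_dim), h, c, W, U, b)
print(h.shape)  # (4,)
```

The cell state `c` is what plays the flip-flop role: the forget gate either holds it or clears it, just like the set/reset inputs on NOR-gate memory.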

February 17, 2017 AI Bridge Project

TensorFlow Summit

This tool is pretty neat.  It lets you visualize your data.

Data Map

It would be neat if I could map different bridge hands into 3D space for display
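As a rough sketch of what that could look like (this is my own toy illustration, not anything from the summit tool): encode each hand as a 52-dimensional 0/1 vector, one entry per card, and project a set of hands down to 3D with PCA via SVD.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_hand():
    """A bridge hand as a 52-dim indicator vector: 13 ones out of 52."""
    v = np.zeros(52)
    v[rng.choice(52, size=13, replace=False)] = 1.0
    return v

hands = np.stack([random_hand() for _ in range(200)])

# PCA via SVD: center the data, then keep the top 3 right singular vectors
centered = hands - hands.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords3d = centered @ vt[:3].T          # each hand mapped to a 3D point

print(coords3d.shape)  # (200, 3)
```

The resulting `coords3d` points could then be fed to any 3D scatter-plot tool; hands with similar shape should land near each other.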

February 16, 2017 AI Bridge Project

Neat Images

I thought this article had some neat images!



AI Bridge Project

TensorFlow 1.0

Looks like there is a new release of TensorFlow.  I see there is an 8-hour video!  Looks like good watching, perhaps this weekend?

AI Bridge Project

Well I guess someone beat me to the punch

I found an amazing journal entry explaining exactly what I wanted to do!  They even did a better job than I was thinking.

They got around the ‘double dummy’ problem by dealing 5 hands, which is a pretty good idea.  They also used gradient descent based on the score.  I wonder if they are calculating double dummy throughout their learning loop.

A couple of things that could be added: they don’t seem to take the opponents’ bidding or vulnerability into consideration.  They also didn’t partition their DNN by suit, which I think would give the model a big head start.

But really cool work.


AI Bridge Project

Deep Learning

This deep learning book is turning out to be very technical, but it is terrific!  It explains exactly what each function does and the reasoning behind it.

My purpose in learning deep learning is to develop a bridge bidding system using raw data.  Essentially, don’t worry about human conventions at all.  Also, don’t tell the computer it has a partner; let it just find the best score based on the information it has.

This insight has allowed me to think of other problems.  E.g., when solving double dummy hands, the max number of tricks you can make might be different from the number of tricks you need.  Thankfully the deal program can tell you how many tricks you can take via a greedy line, and how many via a conservative line.

I think the idea of bidding conventions might be a bit complex.  It might be best to have the computer actually bid what it thinks it can make.  I obviously can’t create machine learning for each bidding sequence, so I think the computer should only take the last bid into consideration.  If the opponents are playing a conventional bid, it would be foolish to infer anything other than that it could be the last bid they make!  I imagine it will emerge that the computer agents have a huge advantage in bidding what they actually have, both for descriptive reasons and for pre-emptive reasons.

A shortcut I could use: once I have a deal, do learning with each of the four hands (e.g. 1 deal = 4 learning iterations).  But that might really bias the data, so I think I won’t do it.

The book also talks about softmax functions, which I think I can use to assign a probability of making each contract, times the ‘expected value’: the scored reward of actually making the contract.
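A sketch of that softmax-times-score idea, where the contract list, the logits, and the score values are all made-up stand-ins (not real bridge scoring or real network outputs):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical: the model outputs one logit per candidate contract
contracts = ["pass", "3NT", "4S", "6S"]
logits = np.array([0.2, 1.5, 1.1, -0.5])       # made-up network outputs
scores = np.array([0.0, 400.0, 420.0, 980.0])  # reward if the contract makes

probs = softmax(logits)                 # probability of making each contract
expected_values = probs * scores        # probability times scored reward
best = contracts[int(np.argmax(expected_values))]
print(best)  # prints 3NT
```

The agent would then pick the bid with the highest expected value rather than the one the network rates most likely on its own, which is how the slam bonus could outweigh a lower making probability.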

The model that seems pretty universal is a DNN with linear regression on the inputs, a convolutional network on each suit, and a couple of ReLU layers, then a softmax expected value.  It would be neat to use gradient descent on the calculated score of the different bids, but I’ll likely just use the default L2 loss.

So that is my best beginner approach.

February 14, 2017 AI Bridge Project

Next on the list

A neat primer!



February 3, 2017 AI Bridge Project