mark my words

atheist father preying for a secular future

Browsing too fast.

I thought this was funny.  I was doing searches and modifying the URL search parameters when I got this message:

http://book.sunwing.ca/cgi-bin/calendrier-lowest-price.cgi?gateway_dep=YYZ&dest_dep=All%20Countries%20xxx1_2_5_7_8_9_10_12_13_14_17_18_24_25_26_27_29_36_39_44_59_63_64_69_70_73_76_77_83_148_156_226_248_1808_1843_2488_2974_4244_568522_568546_569962_1341400_1899141_3049105_3049109&no_hotel=&date_dep=20170901&duration=7DAY&star=5&searchtype=PA&code_ag=rds&alias=btd&language=en&nb_adult=2&nb_child=2&non_adult_forf3=5&non_adult_forf2=7&family=Y

This kind of allows me to do super searches, but I guess they don’t like it.


September 21, 2017 Uncategorized

Funny AI Book Recommendation

On Goodreads:

Because I liked a book about Bill Clinton published in the 90s, I might like an advanced poker book?!  The funny thing is, I’m pretty sure I’ve read that book too, so it is a good suggestion.

June 9, 2017 Uncategorized

LSTM Cells

I haven’t quite got to LSTMs in my book yet, but they were talking about them at the TensorFlow Dev Summit.
I found this neato resource.

LSTMs really remind me of logic gates, especially the flip-flop, where a few NOR gates are put together to make memory.

Using a series of LSTMs might be good for parsing the entire bidding auction.  The bidding is kind of like a sentence.
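Here’s a minimal sketch of what that could look like, assuming TensorFlow/Keras and a made-up encoding where each call in the auction is an integer token (none of this is settled yet):

```python
import tensorflow as tf

# Hypothetical vocabulary: pass, double, redouble, plus the 35 contract bids.
NUM_CALLS = 38

# Embed each call in the auction, run an LSTM over the sequence, and
# predict the next call from the final hidden state.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=NUM_CALLS, output_dim=16),
    tf.keras.layers.LSTM(64),   # summarizes the auction so far, like reading a sentence
    tf.keras.layers.Dense(NUM_CALLS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# A single auction encoded as integer tokens (shape: batch x sequence length).
auction = tf.constant([[1, 0, 5, 0]])   # made-up encoding, e.g. 1C - pass - 1NT - pass
next_call_probs = model(auction)
```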

February 17, 2017 AI Bridge Project

TensorFlow Summit

This tool is pretty neat.  It lets you visualize your data.

Data Map

It would be neat if I could map different bridge hands into 3D space for display.
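As a rough sketch of the idea (the encoding is my own assumption: one 52-dim 0/1 vector per hand, a slot per card), PCA can squash a pile of hands down to 3 coordinates for a scatter plot:

```python
import numpy as np

def hands_to_3d(hands):
    """hands: (n_hands, 52) binary matrix, one slot per card; returns (n_hands, 3)."""
    centered = hands - hands.mean(axis=0)
    # PCA via SVD: project onto the top 3 principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:3].T

# Example: 1000 random 13-card hands, ready for a 3D scatter plot.
rng = np.random.default_rng(0)
hands = np.zeros((1000, 52))
for row in hands:
    row[rng.choice(52, size=13, replace=False)] = 1
coords_3d = hands_to_3d(hands)
```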

February 16, 2017 AI Bridge Project

Neat Images

I thought this article had some neat images!

http://www.wired.co.uk/gallery/machine-learning-graphcore-pictures-inside-ai


AI Bridge Project

TensorFlow 1.0

Looks like there is a new release of TensorFlow.  I see there is an 8-hour video!  Looks like good watching, perhaps this weekend?

https://events.withgoogle.com/tensorflow-dev-summit/watch-the-videos/#content

AI Bridge Project

Well I guess someone beat me to the punch

I found an amazing journal entry explaining exactly what I wanted to do!  They even did a better job than I had in mind.

https://arxiv.org/abs/1607.03290

They got around the ‘double dummy’ problem by dealing 5 hands.  A pretty good idea.  They also used gradient descent based on the score.  I wonder if they are calculating double dummy throughout their learning loop.

A couple of things could be added: they don’t seem to take the opponents’ bidding or vulnerability into consideration.  They also didn’t partition their DNN based on suits, which I think would give the model a big head start.

But really cool work.


AI Bridge Project

Deep Learning

This deep learning book is turning out to be very technical, but it is terrific!  It explains in technical detail exactly what each function does and the reasoning behind it.

My purpose in learning deep learning is to develop a bridge bidding system using raw data.  Essentially, don’t worry about human conventions at all.  Also, don’t tell the computer it has a partner; let it just find the best score based on the information it has.

This insight has allowed me to think of other problems.  E.g., when solving double-dummy hands, the maximum number of tricks you can make might be different from the number of tricks you need.  Thankfully, the deal program can tell you how many tricks you can take via a greedy line and how many via a conservative line.

I think the idea of bidding conventions might be a bit complex.  It might be best to have the computer actually bid what it thinks it can make.  I obviously can’t create machine learning for each bidding sequence, so I think the computer should only take the last bid into consideration.  What happens if the opponents are playing a conventional bid?  Then it would be stupid to try to infer anything other than that it could be the last bid they make!  I imagine it will emerge that all the computer agents have a huge advantage in bidding what they actually have, both for descriptive reasons and for pre-emptive reasons.
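A quick sketch of what ‘only the last bid’ could look like as a network input (the sizes and the call numbering are just my assumptions for now):

```python
import numpy as np

NUM_CALLS = 36  # assumed numbering: 0 = pass, 1..35 = 1C through 7NT

def encode_state(hand_cards, last_bid):
    """Feature vector: 52-dim one-hot of my own cards plus a one-hot of the
    single most recent bid -- nothing else from the auction."""
    hand = np.zeros(52)
    hand[hand_cards] = 1
    last = np.zeros(NUM_CALLS)
    last[last_bid] = 1
    return np.concatenate([hand, last])   # 88-dim input

# Example: some 13-card hand, with everyone having passed so far.
features = encode_state([0, 5, 12, 17, 23, 26, 30, 33, 38, 41, 45, 48, 51], last_bid=0)
```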

A shortcut I could use: once I have a deal, do learning with each of the four hands (e.g., 1 deal = 4 learning iterations).  But this might really bias the data, so I think I won’t do that.

The book also talks about softmax functions, which I think I can use to assign a probability of making each contract; multiplying each probability by the scored reward of actually making that contract gives an ‘expected value’.
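As a toy illustration of that idea (all numbers made up): softmax turns the network’s raw outputs into probabilities, and weighting each probability by the score for making that contract gives an expected value.

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.5])        # hypothetical raw outputs for 3 contracts
scores = np.array([420.0, 400.0, -50.0])  # made-up rewards for making each contract
probs = np.exp(logits) / np.exp(logits).sum()   # softmax
expected_value = (probs * scores).sum()
```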

The model that seems pretty universal is a DNN with a linear layer on the inputs, a convolutional network over each suit, and a couple of ReLU layers, then a softmax to get the expected value.  It would be neat to use gradient descent on the calculated score of the different bids, but I’ll likely just use the default L2 loss.
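A minimal Keras sketch of roughly that shape (the layer sizes, the per-suit split, and the loss are all my own placeholder choices, not anything decided):

```python
import tensorflow as tf

NUM_CONTRACTS = 36  # assumed: pass plus the 35 bids

# One 13-dim vector per suit (a slot per rank, 1 if the card is held).
suit_inputs = [tf.keras.Input(shape=(13, 1), name=s)
               for s in ("clubs", "diamonds", "hearts", "spades")]

# A small convolution over each suit separately, per the partition-by-suit idea.
suit_features = []
for suit in suit_inputs:
    x = tf.keras.layers.Conv1D(8, kernel_size=3, activation="relu")(suit)
    suit_features.append(tf.keras.layers.Flatten()(x))

# Merge the suits, a couple of ReLU layers, then a softmax over the contracts.
x = tf.keras.layers.Concatenate()(suit_features)
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
contract_probs = tf.keras.layers.Dense(NUM_CONTRACTS, activation="softmax")(x)

model = tf.keras.Model(inputs=suit_inputs, outputs=contract_probs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

The expected-value weighting from the previous snippet would then sit on top of contract_probs.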

So that is my best beginner approach.

February 14, 2017 AI Bridge Project

Next on the list

A neat primer!

https://cloud.google.com/blog/big-data/2017/01/learn-tensorflow-and-deep-learning-without-a-phd


February 3, 2017 AI Bridge Project

First bidding TensorFlow

Success!

So, I gave my neural network two hands; for the first one it says I should pass, and for the second I should bid 18.
ha!
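For what it’s worth, here’s a guess at decoding that “18” as a class index, under an assumed numbering of 0 = pass, 1 = 1C, …, 35 = 7NT (my assumption, not anything the network actually knows):

```python
STRAINS = ["C", "D", "H", "S", "NT"]

def decode_bid(index):
    """Map a class index to a call, assuming 0 = pass, 1 = 1C, ..., 35 = 7NT."""
    if index == 0:
        return "pass"
    level, strain = divmod(index - 1, 5)
    return f"{level + 1}{STRAINS[strain]}"

print(decode_bid(0))   # pass
print(decode_bid(18))  # 4H under this numbering
```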

What is all that red stuff?  I guess the syntax in the example is deprecated.  I’ll have to see if I can fix that.

February 1, 2017 AI Bridge Project