Ruzzle Solving – The nerd way ..

I’ve been working on this project in my spare time for about three weeks and, even though it is not yet complete, I decided it was time to share something. The idea was to build a robot that could play Ruzzle against a human player, leaving him no chance to win. After a few days of brainstorming I started writing some Python code: in a few hours I managed to find all the valid words by performing a depth-first search over the board and checking the candidates against a dictionary. That didn’t feel like enough, so I asked myself: why not use OpenCV for image processing combined with Tesseract for letter recognition?
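A minimal sketch of that word search, assuming a 4×4 board of uppercase letters and a plain set as the dictionary (the board and word list below are made-up examples, not real Ruzzle data):

```python
# Depth-first search from every tile, stepping to the 8 neighbours
# without revisiting a tile, checking each path against a word list.
BOARD = [
    "RUZE",
    "ALSO",
    "TEST",
    "WORD",
]
WORDS = {"RULE", "TEST", "WORD", "RATS"}  # stand-in dictionary

def solve(board, words):
    rows, cols = len(board), len(board[0])
    longest = max(len(w) for w in words)
    found = set()

    def dfs(r, c, path, visited):
        path += board[r][c]
        visited = visited | {(r, c)}
        if path in words:
            found.add(path)
        if len(path) >= longest:
            return  # no dictionary word is longer: prune this branch
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols \
                        and (nr, nc) not in visited:
                    dfs(nr, nc, path, visited)

    for r in range(rows):
        for c in range(cols):
            dfs(r, c, "", set())
    return found
```

A real solver would prune by dictionary prefixes (e.g. with a trie) rather than by word length, but the search structure is the same.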


The image as captured by the camera

The first step was to isolate the smartphone’s screen: since the outermost frame of Ruzzle is blue, I decided to convert the image to HSV in order to get a more robust color filter. With OpenCV this is quite simple to achieve:

import cv2
import numpy as np
..
# hue range covering Ruzzle's blue frame (OpenCV hue goes from 0 to 179)
BLUE_MIN = np.array([90, 150, 50], np.uint8)
BLUE_MAX = np.array([110, 255, 255], np.uint8)
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
threshed = cv2.inRange(hsv, BLUE_MIN, BLUE_MAX)

The image after HSV filtering

With the inRange function we keep only the pixels whose values fall between BLUE_MIN and BLUE_MAX, which makes it easy to find the screen edges. The following code snippet looks for the quadrilateral with the biggest area, which is, in our case, the screen perimeter.

contours, hierarchy = cv2.findContours(threshed, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
max_area = 0
max_square = None
for cnt in contours:
    appr = cv2.approxPolyDP(cnt, 0.08*cv2.arcLength(cnt, True), True)
    # keep the biggest 4-sided contour that is large enough to be the screen
    if len(appr) == 4 and cv2.contourArea(appr) > 20000 and cv2.contourArea(appr) > max_area:
        max_area = cv2.contourArea(appr)
        max_square = appr

Screen edges found!

After computing the transform matrix we can finally apply a perspective transformation to crop and deskew the screen area:

approx = rectify(max_square)  # order the corners as TL, TR, BR, BL
h = np.array([[0, 0], [319, 0], [319, 459], [0, 459]], np.float32)
transform = cv2.getPerspectiveTransform(approx, h)
warp = cv2.warpPerspective(original, transform, (320, 460))

The cropped image, almost ready for OCR
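The rectify() helper isn’t shown here; a plausible sketch of it (my assumption, inferred from the destination array h) orders the four detected corners as top-left, top-right, bottom-right, bottom-left so they match the destination quadrilateral:

```python
import numpy as np

def rectify(contour):
    # Order the 4 corner points as TL, TR, BR, BL to match the
    # destination points passed to getPerspectiveTransform.
    pts = contour.reshape(4, 2).astype(np.float32)
    ordered = np.zeros((4, 2), np.float32)
    s = pts.sum(axis=1)           # x + y: smallest at TL, largest at BR
    d = np.diff(pts, axis=1)      # y - x: smallest at TR, largest at BL
    ordered[0] = pts[np.argmin(s)]   # top-left
    ordered[2] = pts[np.argmax(s)]   # bottom-right
    ordered[1] = pts[np.argmin(d)]   # top-right
    ordered[3] = pts[np.argmax(d)]   # bottom-left
    return ordered
```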

Now we can finally crop out the characters to feed to Tesseract. I found Tesseract to be very inaccurate when the images contain noise or unexpected shapes, so I had to find the bounding rectangle of every character and add an outer border in order to let the OCR engine recognize them with high accuracy.

SIDE = 66        # side of a letter tile, in pixels
SPACING = 10     # gap between adjacent tiles
BORDER = 5       # margin trimmed inside each tile
TOP_LEFT = (123, 9)  # (row, col) of the first tile
letters = []
for i in range(4):
    ii = TOP_LEFT[0] + ((SIDE+SPACING)*i)
    for j in range(4):
        jj = TOP_LEFT[1] + ((SIDE+SPACING)*j)
        image = img[ii+BORDER:ii+SIDE-BORDER, jj+BORDER:jj+SIDE-BORDER]
        thresh = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY_INV, 11, 2)
        contours = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[0]
        # the biggest contour inside the tile is the character itself
        maxarea = 0
        char = None
        for cnt in contours:
            approx = cv2.approxPolyDP(cnt, 0.001*cv2.arcLength(cnt, True), True)
            if cv2.contourArea(approx) > maxarea:
                maxarea = cv2.contourArea(approx)
                char = approx
        x, y, w, h = cv2.boundingRect(char)
        resized = image[y:y+h, x:x+w]
        res = cv2.copyMakeBorder(resized, 20, 20, 20, 20,
                                 cv2.BORDER_CONSTANT, value=(255, 255, 255))
        letters.append(res)

Cropped letter


Bounding rectangle


Cropped letter with border
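The Tesseract call itself isn’t shown above. One way to wire it up, assuming the pytesseract wrapper (the original may well shell out to the tesseract binary instead), is to run each cropped letter in single-character mode with an uppercase whitelist:

```python
WHITELIST = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# --psm 10 tells Tesseract to treat the image as a single character
TESS_CONFIG = "--psm 10 -c tessedit_char_whitelist=" + WHITELIST

def ocr_letter(img):
    # Assumption: the pytesseract wrapper is installed; the original
    # post may invoke the tesseract CLI directly instead.
    import pytesseract
    return pytesseract.image_to_string(img, config=TESS_CONFIG).strip()
```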


Found words
