openCV, ubuntu 14.04, and researching robot portraiture techniques for Robot Art Competition 2016

For the robotart.org competition, with a ton of help, I’ve gotten a 6 axis arm to work (it accepts x,y,z coordinates).

I configured openCV on ubuntu like so: http://www.samontab.com/web/2014/06/installing-opencv-2-4-9-in-ubuntu-14-04-lts/

(it has a few steps tacked onto the official linux installation instructions to hook everything up properly; they're explained on the site, and I'll just make a quick note of them below:

sudo gedit /etc/ld.so.conf.d/opencv.conf
  • add the line: /usr/local/lib
sudo ldconfig
then set PKG_CONFIG_PATH for your shell (e.g. in your bashrc):
  • PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
  • export PKG_CONFIG_PATH

) (openCV takes up a decent chunk of space!)

I haven’t used C++ before, though I’ve used C. To compile and run some examples, such as this Hough transform demo:

https://github.com/Itseez/opencv/blob/master/samples/cpp/tutorial_code/ImgTrans/HoughLines_Demo.cpp

$ g++ houghdemo.cpp -o app `pkg-config --cflags --libs opencv`

This creates a file called “app” which can be run like so:
$ ./app face4.jpg
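For reference, roughly the same pipeline (Canny, then probabilistic Hough) can be sketched in Python, assuming the cv2 bindings got built with the install; the thresholds and the face4.jpg default below are just placeholders, not tuned values:

import sys
import math
import cv2

# read the image in grayscale; default to face4.jpg if no argument is given
img = cv2.imread(sys.argv[1] if len(sys.argv) > 1 else "face4.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)                      # thresholds picked by eye
lines = cv2.HoughLinesP(edges, 1, math.pi / 180, 50,
                        minLineLength=20, maxLineGap=5)
out = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):      # draw each detected segment
        cv2.line(out, (x1, y1), (x2, y2), (0, 0, 255), 1)
cv2.imwrite("hough_preview.png", out)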

Man, canny edge detectors on faces look super creepy and not at all recognizable. Also, I don’t think Hough is the way to go for recognizable faces.

Screenshot from 2016-01-14 17:16:26

Alright, so what are some alternative approaches?

stroke-based approach?

I started looking into the general problem of “stroke-based approaches” which will probably be useful later. Here is a good SIGGRAPH paper from 2002 I was skimming through, supplementing with youtube videos when I got confused:

http://web.cs.ucdavis.edu/~ma/SIGGRAPH02/course23/notes/S02c23_3.pdf

http://stackoverflow.com/questions/973094/easiest-algorithm-of-voronoi-diagram-to-implement

I found this comparison of lines only versus allowing different stroke widths (still one color) very compelling:

http://www.craftsy.com/blog/2013/05/discover-the-secrets-to-capturing-a-likeness-in-a-portrait/

It would be interesting to get different stroke widths involved. I wonder what the name of the final photoshop filter (that reduced the image to simple shapes) is. Inkscape’s vector tracing of bitmaps is (according to the credits) based on Potrace, created by Peter Selinger:

potrace.sourceforge.net

Alright, not quite doing the trick..

portraits

Okay, there’s this sweet 2004 paper “Example-Based Composite Sketching of Human Portraits” that Pranjal dug up.

http://grail.cs.washington.edu/wp-content/uploads/2015/08/chen-2004-ecs.pdf

They essentially had an artist generate a training set, then parameterized each feature of the face (eyes, nose, mouth), and had a separate system for the hair. The results are really awesome, but to replicate them, I’d have to have someone dig up 10 year old code and try to get it to run; or generate my own training set, wade into math, and code it all by myself.

Given my limited timeframe (I essentially have two more weeks working by myself), I should probably focus on more artistic and less technical implementations. Yes, the robot has six axes, but motion planning is hard, and I should probably aim for an acceptable entry or risk having no entry at all.

Stippling

An approach that would be slow but would probably capture the likeness better would be to get stippling working. Evil Mad Scientist Labs wrote a stippler in Processing for their egg drawing robot, which outputs an SVG.

http://www.evilmadscientist.com/2012/stipplegen2/

http://wiki.evilmadscientist.com/StippleGen
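For comparison, the dumbest possible stippler (nothing like StippleGen's weighted-Voronoi relaxation) would just throw random points at the image, keep each one with probability proportional to darkness, and dump the survivors as SVG circles; the filename and keep-rate below are arbitrary:

import random
import cv2

img = cv2.imread("face4.jpg", cv2.IMREAD_GRAYSCALE)   # assumed input image
h, w = img.shape
dots = []
for _ in range(200000):                                # candidate samples
    x, y = random.randrange(w), random.randrange(h)
    darkness = 1.0 - img[y, x] / 255.0
    if random.random() < darkness * 0.05:              # keep ~5% of fully black samples
        dots.append((x, y))

with open("stipple.svg", "w") as f:
    f.write('<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">\n' % (w, h))
    for x, y in dots:
        f.write('  <circle cx="%d" cy="%d" r="1" fill="black"/>\n' % (x, y))
    f.write('</svg>\n')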

Presumably the eggbot software converts the svg to gcode at some point.

http://wiki.evilmadscientist.com/Installing_software

If I can’t figure out where their generator is, or if I want to stick to python, there seems to be PyCAM.

Todo

read http://blog.otoro.net/2015/12/28/recurrent-net-dreams-up-fake-chinese-characters-in-vector-format-with-tensorflow/

Calligraphy

First, inspiration: a 250-year-old writer automaton where you can swap out cams to change the gearing and change what it writes. Crazy!

okay, so more modern. this robot arm uses the Hershey vector fonts to draw kanji.

https://en.wikipedia.org/wiki/Hershey_fonts

Turns out there is an open source SVG font for Chinese (Hershey does traditional chinese characters, not simplified), so now I can write messages to my parents :3 if I can convert the svg into commands for the robot.

https://en.wikipedia.org/wiki/WenQuanYi

Haha, there is even a startup that writes notes for you…

http://www.wired.com/2015/02/meet-bond-robot-creates-handwritten-notes/

Evil mad scientist labs has done some work on the topic.

http://www.evilmadscientist.com/2011/hershey-text-an-inkscape-extension-for-engraving-fonts/

https://github.com/evil-mad/EggBot/

This seems tractable. The key step seems to be SVG to gcode. Seems like I should be able to roll my own without too much difficulty or else use existing libraries.
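A first pass at that step might look like the sketch below; it only understands <polyline> elements (real SVG paths would need a proper path parser, e.g. from an existing library), and the Z heights and feed rate are made-up pen-up/pen-down values:

import xml.etree.ElementTree as ET

Z_UP, Z_DOWN, FEED = 5.0, 0.0, 1000                    # assumed machine settings

def svg_to_gcode(svg_file):
    ns = "{http://www.w3.org/2000/svg}"
    lines = ["G21", "G90", "G0 Z%.1f" % Z_UP]          # mm, absolute, pen up
    for poly in ET.parse(svg_file).getroot().iter(ns + "polyline"):
        pts = poly.get("points").replace(",", " ").split()
        xy = list(zip(map(float, pts[0::2]), map(float, pts[1::2])))
        x0, y0 = xy[0]
        lines.append("G0 X%.2f Y%.2f" % (x0, y0))      # travel with pen up
        lines.append("G1 Z%.1f F%d" % (Z_DOWN, FEED))  # pen down
        for x, y in xy[1:]:
            lines.append("G1 X%.2f Y%.2f F%d" % (x, y, FEED))
        lines.append("G0 Z%.1f" % Z_UP)                # pen up at the end of each polyline
    return "\n".join(lines)

# print(svg_to_gcode("drawing.svg"))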

misc. videos

^– hi again disney research

impressive with just derpy hobby servos

and of course, mcqueen

hi 2016 (2 servo drawing robot arm, tripod gait 12 servo hexapod, visit to NASA, quadcopter tuning, etc.)

hm, haven’t updated in a while.

i built a lot of robots with my parents over the winter break. i built a robot arm and refreshed on inverse kinematics; more specifically, make sure your servos are rotating as you expect: the IK goes counterclockwise since angles increase that way, but your servos may be increasing in a clockwise direction… a simple map(theta, 0, 180, 180, 0) will fix your problem if you catch it.
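for reference, the same idea as a python sketch; the link lengths are made up, and the exact angle conventions and elbow configuration depend on how the arm is actually mounted:

import math

L1, L2 = 80.0, 80.0          # assumed link lengths in mm

def ik(x, y):
    """two-link planar IK; returns (shoulder, elbow) in degrees for one elbow configuration."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return math.degrees(theta1), math.degrees(theta2)

def to_servo(theta, flipped=False):
    """clamp to 0-180 and mirror if the servo happens to turn clockwise."""
    theta = max(0.0, min(180.0, theta))
    return 180.0 - theta if flipped else theta   # same idea as map(theta, 0, 180, 180, 0)

print(ik(100, 60), to_servo(ik(100, 60)[0], flipped=True))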

2016-01-01

processing takes in x,y coordinates drawn on the screen and spits them out to the arduino over serial, which does the inverse kinematics and spits out the theta values to the servos

https://github.com/NarwhalEdu/CopyCat/blob/master/Code/basicsIK/basicsIK.ino

or for the one where it draws what you draw on the screen, https://gist.github.com/nouyang/b312b9ea5c67baa0c914
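the processing side could also be a few lines of python with pyserial; the port name, baud rate, and the "x,y" line format below are guesses that would have to match whatever the .ino actually parses:

import time
import serial                                            # pyserial

points = [(50, 50), (60, 50), (60, 60)]                  # whatever you want drawn
port = serial.Serial("/dev/ttyACM0", 9600, timeout=1)    # assumed port + baud
time.sleep(2)                                            # wait for the arduino to reset
for x, y in points:
    port.write(("%d,%d\n" % (x, y)).encode())            # one "x,y" pair per line
    time.sleep(0.05)                                     # crude pacing so the servos keep up
port.close()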

also tried to draw a face.

it does not draw faces well, in part because i have derpy three-year-old code

face3

this processing code requires a lot of processing libraries. it thresholds the image, performs canny edge detection, then runs a walking algorithm (scan the image in x and y; at each black pixel, check whether its neighbors are black as well, and walk along them) to turn the edges into vectors. then it outputs to the robot, but the robot is limited in resolution (arduino servo library) and by cheap hobby servo overshoot.
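the walking step itself is roughly the following (a naive python sketch using openCV for the canny part; no branch handling, and the thresholds and input filename are made up):

import cv2
import numpy as np

img = cv2.imread("face4.jpg", cv2.IMREAD_GRAYSCALE)       # assumed input
edges = cv2.Canny(img, 50, 150)
visited = np.zeros_like(edges, dtype=bool)
neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

polylines = []
h, w = edges.shape
for y in range(h):
    for x in range(w):
        if edges[y, x] and not visited[y, x]:
            path, cy, cx = [(x, y)], y, x                  # start a new walk here
            visited[cy, cx] = True
            while True:
                for dy, dx in neighbours:                  # look for an unvisited edge neighbour
                    ny, nx = cy + dy, cx + dx
                    if 0 <= ny < h and 0 <= nx < w and edges[ny, nx] and not visited[ny, nx]:
                        visited[ny, nx] = True
                        path.append((nx, ny))
                        cy, cx = ny, nx
                        break
                else:
                    break                                  # no neighbour left: walk ends
            if len(path) > 5:                              # drop tiny fragments
                polylines.append(path)
print(len(polylines), "polylines")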

below you can see a preview in python. (basic code: I basically copied the output from processing into a text file and added some python code to plot the values)

still need to check that the image is within the working envelope of the arm. the IK is fixed with the arm “up”.
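that check could be bolted onto the plotting script, something like the sketch below; the link lengths, the inner dead zone, and the dumped filename/format are all assumptions:

import math
import matplotlib.pyplot as plt

L1, L2, R_MIN = 80.0, 80.0, 30.0       # assumed link lengths / inner dead zone, mm

points = []
with open("face_points.txt") as f:     # hypothetical dump copied from processing
    for line in f:
        if not line.strip():
            continue
        x, y = map(float, line.split(","))
        points.append((x, y))

# flag anything outside the reachable annulus of the arm
bad = [(x, y) for x, y in points if not R_MIN <= math.hypot(x, y) <= L1 + L2]
plt.plot([p[0] for p in points], [p[1] for p in points], ".", markersize=1)
plt.plot([p[0] for p in bad], [p[1] for p in bad], "rx")
plt.gca().set_aspect("equal")
plt.gca().invert_yaxis()               # image coordinates have y pointing down
plt.show()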

face2

faceservo

problem with the walking algorithm: it adds a box around the image. irritating. need to rewrite the code. looking into openCV.

i also rehashed my hexapod project with 12 servos and popsicle sticks

hexapod

basically this https://github.com/nouyang/18-servo-hexapod/blob/master/arduino_may13_2011.pde

but modified to work with the servo configuration on the rectangular robot, and added code to allow you to step through the gait with “j” and “k”: https://gist.github.com/nouyang/d9b6474e3ee412b9b05b
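the step-through idea is basically a phase table you index forward and back; a python sketch of the same thing (the servo angles and the tripod grouping below are placeholders for the real mounting):

UP, DOWN, FWD, BACK = 60, 100, 120, 70        # assumed lift/swing servo angles

# each phase: (lift for tripod A, swing for A, lift for B, swing for B)
# tripod A = legs 0, 2, 4 ; tripod B = legs 1, 3, 5
PHASES = [
    (UP,   FWD,  DOWN, BACK),   # A lifts and swings forward, B pushes back
    (DOWN, FWD,  DOWN, BACK),   # A plants
    (DOWN, BACK, UP,   FWD),    # B lifts and swings forward, A pushes back
    (DOWN, BACK, DOWN, FWD),    # B plants
]

i = 0
while True:
    key = input("j = next phase, k = previous, q = quit: ").strip()
    if key == "q":
        break
    if key == "j":
        i = (i + 1) % len(PHASES)
    elif key == "k":
        i = (i - 1) % len(PHASES)
    print("phase %d -> lift/swing targets %s" % (i, PHASES[i]))
    # here you'd actually write these angles out to the 12 servos over serial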

need to implement the other gaits; also, this moves so smoothly, envious, but they have lasercutter :3

worked on the quad, now stuck at the calibration stage 😡 because i have not built a quad before, i could not push through it in a day or two, unlike the drawing arm and hexapod.

quad

made from a sad clothes drying rack we took apart

transmittercable

we couldn’t find the original cable for the transmitter, so we connected the ports up with an FTDI -> USB cable as per http://psychoul.com/electronics/how-to-make-your-own-usb-cable-for-hk-t6a-calibration

zero

used http://www.sgr.info/usbradio/download.htm and calibrated my servos to zero… took a while to realize it *can* and *should* read the current values (guess my wires were loose), but after that the values became a lot easier to input. used the kk2 screen to fix some controls that were reversed from what the kk2 expected (left = left and not right, etc.). zeroed all the values on the kk2. turns out (minus the flipping controls) I could zero just as well using the trim knobs on the controller itself.

went to visit the NASA space museum in houston. they had a little robot that made and served you froyo. adorable.

nasaicecream

also, some regal looking hexapods in the actual NASA workplace.

nasarobot

at MITERS I got a robot arm working with lots of help from MITERS / London Hackerspace / john from BUILDS, for the robot art competition. http://robotart.org/

i’m now robot art-ing. here is the result of using Fengrave on a black and white image with appropriate offsets to produce gcode (well, limited to G0 and G1 commands)

fengrave

robotdrawing

face code still derp. (the streaks are because of how i wrote the gcode translator: it goes straight to the x,y,z position instead of moving to x,y and then lowering z; a sketch of the fix is below). too many x,y points. draws slowly.

face
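the fix is simple in principle: whenever a G0/G1 line carries a Z as well as X/Y, split it into the X/Y move followed by the Z move. a minimal python sketch (only understands plain G0/G1 words, and doesn't try to distinguish pen-up vs pen-down ordering):

import re

def split_xy_z(gcode_lines):
    """split combined X/Y/Z moves into an XY move followed by a separate Z move."""
    out = []
    for line in gcode_lines:
        words = dict(re.findall(r"([GXYZF])([-\d.]+)", line.upper()))
        if words.get("G") in ("0", "1") and "Z" in words and ("X" in words or "Y" in words):
            xy = "G" + words["G"]
            for axis in ("X", "Y", "F"):               # keep the planar move (and feed) first
                if axis in words:
                    xy += " %s%s" % (axis, words[axis])
            out.append(xy)
            out.append("G%s Z%s" % (words["G"], words["Z"]))   # then move z on its own
        else:
            out.append(line)
    return out

print("\n".join(split_xy_z(["G1 X10 Y10 Z0 F1000", "G1 X20 Y10"])))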

michael made a crayon extruder (= metal tube + power resistor) and also a pen mount. crayons = hard to control flow rate. it started making a square, then pooped out a lot of melted crayon. alas.

crayonextrusion

learned a lot of patience dealing with old manuals and 20-year-old operating systems / controllers. the main issue turned out to be a dumb calibration assumption (the robot had arrows; should have ignored them and used the indentations instead).

https://github.com/miters/gdmux gcode -> V+

also, i learned about the oscilloscope’s rs232 decoder! had to invert to get it working properly (zeros are high in rs232?). scope ground, tx line. bam, now you can check whether you are actually transmitting all the carriage returns and line feeds you need…

2016-01-13

currently: reading up on image processing. openCV. http://web.cs.ucdavis.edu/~ma/SIGGRAPH02/course23/notes/S02c23_3.pdf

terse update. more details available if questions exist.

many thanks to my parents for being excited and not jaded