ENSCI Workshop: Big Data and Quantified Self

app prototype

Last week I took part in a really great workshop (you can find a Storify about it here). The principle was simple: five days to make a prototype of a mobile webapp using the Big Data accessible through APIs. Since I had never done any web programming before, it was quite an exhausting week (damn you JavaScript and your unclear rules for variable definition!).

The workshop was run by Geoffrey DORNE and Florent DELOISON. Geoffrey has a really great website (in French) about graphic design, and Florent has a blog about all the cool stuff he is making (in both French and English).

I mostly worked with the Twitter API. My project was an application that locates the closest tweet carrying a user-chosen hashtag and then helps the user find it by displaying a compass. I thought this could be fun for marketing ("First one there gets free food") or for meeting people in your city ("If you find me, I'll buy you a drink").

I couldn't fully finish the project in the time available. I had a lot of problems with cross-browser JavaScript support: parts of my code worked in one browser, parts in another (I now understand the pain of web developers, and this story, much better). In the end I came up with one app working on my laptop, which searched for the closest tweet with a specific hashtag, and one app in Chrome on the iPad, which displayed a compass leading you to coordinates entered by the user. If I find some time I might try to finish it with the help of the Stack Overflow people and make one fully functioning app.

Anyway, here are the best pieces of code I found on the subject:

First, to implement the compass heading, you can access the iPad and iPhone compass through this piece of code (sorry, Android users). It's a real compass heading relative to north, not just a gyroscope, which is pretty great.
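To show the idea, here is a minimal sketch of how I combined it with the target's position: compute the bearing from the user to the target, then subtract the compass heading to get the needle angle. The `bearingTo` helper and the variable names are mine, and `webkitCompassHeading` is the iOS-only property that snippet relies on.

```javascript
// Initial bearing (degrees clockwise from north) from point 1 to point 2,
// using the standard great-circle bearing formula. Inputs in decimal degrees.
function bearingTo(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLon = toRad(lon2 - lon1);
  const y = Math.sin(dLon) * Math.cos(toRad(lat2));
  const x =
    Math.cos(toRad(lat1)) * Math.sin(toRad(lat2)) -
    Math.sin(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.cos(dLon);
  return ((Math.atan2(y, x) * 180) / Math.PI + 360) % 360;
}

// In the browser (iOS Safari exposes a true compass heading):
// window.addEventListener("deviceorientation", (e) => {
//   const heading = e.webkitCompassHeading; // degrees from north
//   const needle = bearingTo(myLat, myLon, tweetLat, tweetLon) - heading;
//   arrow.style.transform = "rotate(" + needle + "deg)";
// });
```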

To get geolocated tweets, here is a cool GitHub repository. It displays tweets around you using your device's geolocation.
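The gist of the approach is simple: get the device position with the Geolocation API, then query Twitter's search endpoint with its `geocode=lat,lon,radius` parameter. The helper below is an illustration of mine, not the repository's actual code, and the search URL reflects the Twitter Search API as it existed at the time.

```javascript
// Sketch: build a geolocated tweet search request, assuming the Twitter
// Search API's "geocode=lat,lon,radius" parameter (helper name is mine).
function buildTweetSearchUrl(lat, lon, radiusKm, query) {
  const params = [
    "q=" + encodeURIComponent(query),
    "geocode=" + lat + "," + lon + "," + radiusKm + "km",
  ];
  return "https://search.twitter.com/search.json?" + params.join("&");
}

// In the browser, the device position comes from the Geolocation API:
// navigator.geolocation.getCurrentPosition((pos) => {
//   const url = buildTweetSearchUrl(
//     pos.coords.latitude, pos.coords.longitude, 5, "#paris");
//   // fetch url, then plot the returned tweets around the user
// });
```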

To search through hashtags, here is another GitHub repository.
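Once the search returns a list of geotagged tweets, finding the closest one is just a distance comparison. Here is the sketch I used for that part of the logic (the `closestTweet` helper and the `{lat, lon, text}` shape are my own assumptions, not code from the repository):

```javascript
// Haversine great-circle distance in kilometres (decimal-degree inputs).
function haversineKm(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const R = 6371; // mean Earth radius, km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Pick the closest tweet to the user from a list of {lat, lon, text} objects.
function closestTweet(userLat, userLon, tweets) {
  let best = null;
  let bestDist = Infinity;
  for (const t of tweets) {
    const d = haversineKm(userLat, userLon, t.lat, t.lon);
    if (d < bestDist) { bestDist = d; best = t; }
  }
  return best;
}
```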

Finally, a nice piece of code for Processing that I found, which lets you simply use Twitter4J to post messages. You have to be registered as a developer on the Twitter website (make sure you register your application as Read and Write to be able to post).

Well, that's it. I hope to write another post soon about my fully finished and cross-browser-compliant webapp!

Drawing with the Shapeoko

I was having trouble in my drawing course (I'm really crappy at drawing), so I figured: why not use my Shapeoko to draw for me?

drawing with shapeoko

I made a simple pen holder to replace the Dremel, and decided to use half-tones to render images. For this, I downloaded the very cool Scriptographer plugin for Illustrator. It allows you (using the Object Raster script) to make a half-tone version of any image using any vector graphic you want!

To convert this to G-code, I tried to use PyCAM, but it was way too slow. So I used the great 2.5D G-code creator PartKAM. It supports SVG and works no matter which OS you use, since it's Flash! With the "follow path" setting I was able to draw my Illustrator file.

(Sorry for the crappy quality; I don't have a camera, so I had to film it with Photo Booth.)

SimpleOpenNI x OpenCV on Processing

So this afternoon I had some time at the school, and I really didn't feel like doing any more Illustrator presentations (I've come to realize these are an inevitable side effect of being in a design school). A few days ago I came across some really great work about projecting images onto objects. As you can see in this video, the results are quite impressive:

Since I had access to both a Kinect and a projector, I thought this could be a great little project. I didn't really get all the way there, but I had time to write a little Processing snippet which may come in handy: it goes through the Kinect depth map, converts the points in a specific depth zone (say, the top of your desk) into a black-and-white image, which is then processed by the OpenCV blob detection. In the end, it projects onto the objects on your desk the rectangles containing their shapes. Nothing very complicated, but since the conversion from a Kinect object to an OpenCV object isn't obvious, it might be useful (I didn't take any pictures, sorry!).

Get the code here! (you need SimpleOpenNI and OpenCV to make it work)

Dyskograf

Here is a creation by Avoka: the Dyskograf.

It's a turntable hacked with what I guess is a video camera, which reads the tracks you draw on paper discs.

It's a really great project, and it's funny how the idea has the same origin as my own project: showing what a loop is through the rotation of a circle.

This made me think, though: it could be possible to really simplify this object, offering a simple replacement turntable arm that could instantly transform any turntable into a graphic beatbox.

A reflectance sensor array linked to an Arduino, outputting a different sound every time a sensor detects a reflectance spike, would be enough! It would work kind of like this graphic robot! (You can "program" it by drawing on control disks, much the same way we used to program computers with punch cards.)
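To make the idea concrete, here is a sketch of the trigger logic (in JavaScript rather than Arduino C, purely for illustration; all names are mine): a sound should fire only on the rising edge of a reflectance spike, so each mark drawn on the disc is played once per pass rather than continuously while the sensor sits over it.

```javascript
// Edge-triggered spike detector for an array of reflectance sensors.
// Returns the indices of sensors that just crossed the threshold
// (rising edge only), so each drawn mark triggers its sound once.
function makeSpikeDetector(numSensors, threshold) {
  const wasHigh = new Array(numSensors).fill(false);
  return function detect(readings) {
    const triggered = [];
    for (let i = 0; i < numSensors; i++) {
      const isHigh = readings[i] > threshold;
      if (isHigh && !wasHigh[i]) triggered.push(i); // rising edge only
      wasHigh[i] = isHigh;
    }
    return triggered;
  };
}
```

On the Arduino side, the same loop would read the analog pins each tick and fire one note per triggered sensor index.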

To conclude:

No, real programmers draw on a disc!

The making of Sifteo

Here is an article I stumbled upon a few months ago and that I still re-read from time to time. It explains the work the Sifteo team did for their second-generation product.

If you don't know what Sifteo cubes are: they are little cubes, each with a small touch screen and a few sensors inside, on which you can play a mix of traditional and video games. What is really great about this project is how the games manage to create the perfect mix of physical and digital!

This article, written by Elizabeth Scott, an engineer on the Sifteo team, really is a comprehensive and well-documented report of their work. It's everything an engineering course should be! You can follow their quest for the perfect solution, their doubts, and the flashes of genius that were really necessary to face such a challenge.

The most interesting part is the one about how they had to cope with the limitations of their hardware while writing the software. In the D.I.Y. community, it's not really common to see these issues arise: we tend to work with overpowered prototyping boards, and most of the time the efficient use of resources isn't the main concern. CPU power is now cheaper than the intelligence necessary to reduce the need for it. The Sifteo engineers had to take their inspiration from the work Nintendo engineers did to cope with the limitations of the NES, the Game Boy, or the Super Nintendo.

To conclude, it's really worth the long read; there is a lot to learn, and it's a reminder of how amazing this kind of engineering work can be!