Plans for the Desk

EDIT AGAIN: Scratch the whole array idea entirely. I just found a method at NUI Group that uses photoresistors with fast scanning, but only requires them along the edge of the screen. Even though they sit on the edge, there are none of the occlusion problems one might expect, thanks to some magic with polarizing film. Will post updates soon.

EDIT: Well, it seems that the rise time on photoresistors is generally about 60 ms, which is a hit on the response time of the screen. Basically, no matter how fast we sample the matrix, true response is limited to about 16 fps. So the goal is either to find a cheap photoresistor with a rise time of ~10 ms -> 100 fps, or else to think of a new sensing system.

First of all, we (or maybe just I) plan on finishing the first desk by the end of Christmas break, but I have some ideas for the next version of the desk that I'll talk about now. The major improvement I want to make is the size: the first version of the desk is about a foot and a half deep because of the distance the camera requires from the screen. So the first step in making a thin desk is eliminating the camera from the design.

Obviously, we need some sort of visual sensor to replace the camera, so the option we will try is a large array of analog light sensors (a photocell network) underneath the screen. A microcontroller will then read the value of each sensor sequentially and use an algorithm to deduce the location of blobs. The circuit underneath the screen will look something like this. To select a column, the MCU puts a voltage on one of the top pins and a high impedance on the others. Then, to read a specific sensor, the MCU routes one of the row outputs through an analog-to-digital converter (with a multiplexer in between).


So far it seems that diodes are required on every photocell in order to prevent current from flowing back through the other resistors. I simulated the circuit, and the back-flow was somewhat negligible, especially in larger matrices, but diodes are cheap and precision is good, so I will keep them.
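For concreteness, here is a rough sketch of the scan loop described above. It's in Java just to simulate the logic on a PC; the real thing would be MCU firmware, and readAdc() and the column-select step are hypothetical stand-ins for the actual hardware calls.

```java
// Simulated sketch of the MCU scan loop: select one column at a time,
// then read each row through the ADC multiplexer. Not real firmware.
public class MatrixScanner {
    static final int COLS = 32, ROWS = 24; // 768 photocells total

    // Placeholder for the real ADC read; would return a light level.
    static int readAdc(int col, int row) {
        return 0; // dark everywhere in this stub
    }

    // One full scan of the matrix: 32 x 24 = 768 samples.
    public static int[][] scan() {
        int[][] frame = new int[ROWS][COLS];
        for (int col = 0; col < COLS; col++) {
            // Here the MCU would drive this column pin high and put
            // the other column pins at high impedance.
            for (int row = 0; row < ROWS; row++) {
                frame[row][col] = readAdc(col, row);
            }
        }
        return frame; // blob detection would run on this frame
    }
}
```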

The other concern is the cost and difficulty of construction. For the first version of the desk I want to use a 32x24 matrix of photocells, so there are 768 cells that need to be purchased and soldered. So far the cheapest I have been able to find are $0.32 each for large-volume orders. I would be very glad to find one closer to $0.20, which would bring the cost of all the photocells down to about $150. The diodes and resistors needed total no more than $10-$15, so they aren't a problem. I might be able to use my lab's rapid prototyping machine to fabricate the PCB and solder all of the components onto it.

Piano Trainer: Android Progress

Looks a little different now:


I am going to give a little walk-through of how I have accomplished everything so far. The simplest way to implement the piano roll was to extend the ScrollView class. Here are the major methods that needed writing, with some short explanations:

protected void onDraw(Canvas canvas) { // "paint"
    super.onDraw(canvas);
    // All of our other drawing methods here...
}

public boolean onTouchEvent(MotionEvent event) {
    // If touched for the first time, save the position
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        event_start_Y = event.getRawY();
        event_start_Time = current_time;
        return true;
    }
    // If it is being dragged, find the distance and scroll
    else if (event.getAction() == MotionEvent.ACTION_MOVE) {
        float y = event.getRawY();
        current_time = ((y - event_start_Y) * 20) + event_start_Time;
        // Clamp to the ends of the song
        if (current_time > song_duration) current_time = song_duration;
        if (current_time < 0) current_time = 0;
        slider.setProgress((int) current_time);
        percentage.setText((int) (100 * current_time / song_duration) + "%");
        this.invalidate();
        return true;
    }
    // We don't handle anything else
    else {
        return false;
    }
}

The ScrollView was the simplest way to go, but it did not allow for quick redrawing via a thread. Because of the way invalidate() works, you are not guaranteed that the component will be redrawn quickly. Quoting the Android API, "If the view is visible, onDraw(Canvas) will be called at some point in the future."

Instead, most articles I have read recommend extending the SurfaceView class and implementing SurfaceHolder.Callback. Your thread must then have a reference to the SurfaceHolder of your SurfaceView. From there, you can acquire and lock a canvas, draw to it, and then unlock and post it back to your view. Here are the simplified methods I used:
/* Inside our custom SurfaceView */
public void repaint() {
    // For manually refreshing in response to events
    // from the buttons and the slider
    Canvas c = this.getHolder().lockCanvas();
    if (c == null) return; // surface not ready yet
    draw(c);
    this.getHolder().unlockCanvasAndPost(c);
}

class MidiThread extends Thread {

    SurfaceHolder holder; // The holder where we get our canvas
    MidiView view;        // The actual view, which has our drawing methods

    public MidiThread(SurfaceHolder h, MidiView m) {
        this.holder = h; // Store the holder
        this.view = m;   // Store the view

        // Note: you could also just pass the view
        // and use view.getHolder() to get the holder
    }

    public void run() {

        Canvas c = null;
        long start_time = System.currentTimeMillis();

        do {
            try {
                // Update the time elapsed
                current_time = (double) (System.currentTimeMillis() - start_time);
                // Get the canvas to draw to
                c = holder.lockCanvas();

                // Call our custom paint method
                if (c != null) view.onDraw(c);

            } catch (Exception e) {
                // In case something goes wrong while drawing
                e.printStackTrace();
            } finally {
                // Give up the canvas and post it to the view;
                // only unlock if the lock actually succeeded
                if (c != null) holder.unlockCanvasAndPost(c);
                // Break here so we don't keep running after playback stops
                if (!is_playing) break;
            }

        } while (current_time < song_duration);
    }
}

I found a vertical SeekBar widget on Stack Overflow, which can be found here. All credit to that guy!!! (You rock!) Here is a simplified version of how my XML file looks with the slider and buttons:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:id="@+id/main">

    <TableLayout
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:stretchColumns="1">
        <TableRow>
            <LinearLayout
                android:id="@+id/view_holder"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent" />
            <LinearLayout
                android:id="@+id/menu_holder"
                android:layout_width="50px"
                android:layout_height="fill_parent"
                android:orientation="vertical">

                <Button android:id="@+id/play"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:text="Play" />
                <Button android:id="@+id/load"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:text="Load" />
                <Button android:id="@+id/percentage"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:text="100%" />

                <com.midi.miditrainer.VerticalSlider
                    android:id="@+id/slider"
                    android:layout_width="fill_parent"
                    android:layout_height="400px"
                    android:layout_weight="1" />
            </LinearLayout>
        </TableRow>
    </TableLayout>
</LinearLayout>
Anyways, thanks for reading!

Synthesia Clone "Piano Hero": Release

Description: Synthesia is a piano game and trainer, written in C++, that builds a piano roll out of a Midi file. Synthesia is extremely helpful for quickly learning new songs, especially for those who struggle with reading sheet music. This clone is the first step towards an Android version of the game, to make Synthesia more portable and less of a hassle.


Download: here

Notes: You will need 7-Zip to extract it and the Java Runtime Environment to run it. The Midi file included is from Sebastian Wolff.

Known Issues:
  • Changing speed moves position
Near-Near Future Updates:
  • Android Support
  • Midi Type 2, 3 Support
Somewhat-near Future Updates:
  • Track Instrument selection
  • More view options
Distant Future Updates:
  • Multiple Keyboard input

Carbonite Clone Birthday Surprise!

My mom's birthday is tomorrow, and I've always wanted to buy her a subscription to Carbonite (one of the more popular back-up-your-files-to-the-cloud services). She's a teacher and has amassed quite a large number of school documents that she prints copies of from year to year. So I went to the Carbonite site and was devastated when I found the yearly cost to be $60. Far too steep for my limited income! Of course, the only solution was to make a Carbonite clone.

CarboniteClone is a very simple program. It opens a system tray icon and checks every 30 seconds or so whether any files in a user-specified root directory have changed. If so, it uploads them to an offsite FTP server (also user-specified). If you put a shortcut to the .EXE in the Startup folder of your Programs menu, it will of course load at startup. I used FTPLib as the internal FTP client (so assume I'm releasing this under whatever license fits best with FTPLib). Also note that this is Windows-only (sorry, but it was just a quickie!)
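The change-detection idea can be sketched like this (Java here purely for illustration; the actual program is C++ using FTPLib, and these class and method names are mine). Each poll takes a snapshot mapping file path to last-modified time, and the diff against the previous snapshot tells us what to upload:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the polling logic: snapshot the tree, diff against the
// previous snapshot, upload whatever changed.
public class ChangeDetector {

    // Pure core: given the previous and current snapshots
    // (path -> last-modified millis), return new or modified paths.
    public static List<String> diff(Map<String, Long> previous,
                                    Map<String, Long> current) {
        List<String> changed = new ArrayList<>();
        for (Map.Entry<String, Long> e : current.entrySet()) {
            Long old = previous.get(e.getKey());
            if (old == null || !old.equals(e.getValue())) {
                changed.add(e.getKey());
            }
        }
        return changed;
    }

    // Snapshot helper: walk the root directory recursively.
    public static void snapshot(File f, Map<String, Long> out) {
        if (f.isDirectory()) {
            File[] kids = f.listFiles();
            if (kids != null) for (File k : kids) snapshot(k, out);
        } else {
            out.put(f.getPath(), f.lastModified());
        }
    }
}
```

The main loop would then sleep ~30 seconds, rebuild the snapshot, and hand the diff to the FTP client.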

The tray icon has 3 states indicating that it's in standby, transferring files, or experiencing a connection problem. When transferring files, the icon's tooltip displays a percent complete.

 

There are two text files that concern the user:

  • settings.txt (contains 4 lines for "local root directory", "FTP hostname", "username", and "password")
  • logfile.txt (self-explanatory, but fun to look at)

Please visit http://code.google.com/p/carboniteclone/ for the source code (C++) or a zipped distributable that you can put in Program Files or wherever. Keep in mind that I put this together in a day and the code is extremely messy and completely uncommented.

Hopefully Mom likes the gift!

Apache + Erlang Web Frameworks

I recently started an Erlang project involving grid computation. One requirement is a web interface into the grid, and naturally that interface would work best if the web app is written in Erlang too. As far as I know, the Erlang web frameworks currently available are BeepBeep, Chicago Boss, Erlang Web, ErlyWeb, Nitrogen, and Zotonic. The problem I have with these frameworks is that they're designed to work with Erlang-based servers (namely Inets, Misultin, Mochiweb, and Yaws). I'm running Apache, so out of the box none of them work on my setup. Normally, bundling a high-capacity server makes sense: if your webapp is in Erlang, you probably want the whole system to be robust, distributed, large-scale, etc. However, in my case the web interface to the grid is fairly light and not mission-critical. So this is the ideal framework organization for my setup:
[diagram: Apache -> CGI bridge -> Erlang node running the framework]
Creating an Erlang framework that can run under Apache (and most other HTTP servers) is a matter of hosting a CGI bridge to the framework. The bridge makes a remote procedure call to an Erlang node requesting a page to be constructed. Since a CGI program can be written in C/C++, it can interface with Erlang nodes using the erl_interface library. Put simply, we're using a small piece of code (a working rough draft is ~200 lines) to connect a web framework written in Erlang with any web server that supports CGI (notably Apache and IIS).
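To make the bridge's job concrete, here is a hedged sketch of the CGI side (in Java as a stand-in; the actual draft is C/C++ with erl_interface). It only shows gathering the standard CGI server variables and POST body that would be forwarded to the Erlang node; the RPC itself is stubbed out:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual CGI bridge stub: collect GET/POST data from the standard
// CGI environment, then (in the real bridge) RPC to an Erlang node.
public class CgiBridge {

    // Parse "a=1&b=2" into an ordered map (no URL-decoding here).
    public static Map<String, String> parseQuery(String qs) {
        Map<String, String> params = new LinkedHashMap<>();
        if (qs == null || qs.isEmpty()) return params;
        for (String pair : qs.split("&")) {
            int eq = pair.indexOf('=');
            if (eq >= 0) params.put(pair.substring(0, eq), pair.substring(eq + 1));
            else params.put(pair, "");
        }
        return params;
    }

    public static void main(String[] args) throws IOException {
        String method = System.getenv("REQUEST_METHOD");
        Map<String, String> get = parseQuery(System.getenv("QUERY_STRING"));

        String post = "";
        if ("POST".equals(method)) {
            // Per the CGI spec, the POST body arrives on stdin with
            // its length in CONTENT_LENGTH.
            int len = Integer.parseInt(System.getenv("CONTENT_LENGTH"));
            byte[] body = new byte[len];
            InputStream in = System.in;
            int off = 0;
            while (off < len) {
                int n = in.read(body, off, len - off);
                if (n < 0) break;
                off += n;
            }
            post = new String(body, 0, off);
        }

        // The real bridge would now RPC to the Erlang node and relay
        // the generated page; this stub just echoes what it collected.
        System.out.println("Content-Type: text/plain\n");
        System.out.println("GET params: " + get);
        System.out.println("POST body:  " + post);
    }
}
```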

This CGI bridge opens up Erlang frameworks to a host of existing non-Erlang web servers. The major use is connecting smaller dynamic websites with larger distributed Erlang applications. Furthermore, in the event of a web server crash, the larger application persists; only the web interface goes down. I've written a working rough draft of the bridge, and Apache is serving pages generated from Erlang. POST and GET data, among other server variables, are passed into the framework/webapp stub, removing the need for side effects within Erlang. The next step is seeing whether I can connect this bridge to an existing framework.

Synthesia Clone "Piano Hero": Creating the Roll

Summer Project Numero Uno: Synthesia Clone for Android
In the last post, I explained (very badly) how to convert MidiEvents in Java into timed data you can use to create notes. Now I am going to show some code for building a roll using the times in your notes. The basic idea is to have a thread that continuously redraws while keeping track of how much time has passed.

public void drawRoll(Graphics2D g, ArrayList<Note> notes) {
A:  int key_width = 16;
    int key_height = 100;

    // Scale falling notes so that a note of length (show_duration)
    // will stretch to fit the entire space available
B:  double scale_factor = (double) window_height / show_duration;
    boolean drawing_black_keys = false;

    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < notes.size(); j++) {
            Note note = notes.get(j);
C:          if (note.end_time < current_time) { notes.remove(j); j--; continue; }
D:          if (note.start_time > current_time + show_duration) break;

E:          if (note.is_black_key != drawing_black_keys) continue;
F:          int x = (int) (NoteOffset[note.note_number] * key_width);
G:          int y_start = roll_height - (int) ((note.start_time - current_time) * scale_factor);
            int y_end = roll_height - (int) ((note.end_time - current_time) * scale_factor);
            int height = y_start - y_end;
            if (height == 0) height = 5;
            // Remember that positive and negative y is backwards
H:          if (note.is_black_key) drawBlackNote(g, x, y_end, key_width, height);
            else drawWhiteNote(g, x, y_end, key_width, height);
        }
        drawing_black_keys = true;
    }
}
(A) - You should make other methods to draw black keys and white keys and change the appearance of the keys inside of those.
(B) - window_height is the height of my JFrame; change as desired.
(C) - Here, we remove notes that we have already passed.
(D) - If we have found a note that isn't in view yet, we are done with this loop.
(E) - Here, we skip the black keys on the first loop so that they are drawn on top of the white keys.
(F) - NoteOffset is actually a Double[], so the whole expression needs to be cast to an int. NoteOffset contains, for each key, the number of white keys plus one half times the number of black keys to its left.
(G) - roll_height is window_height - keyboard_height.
(H) - Again, you should make your own methods for drawing keys however you like.

I have found it better to draw the static images beforehand by creating a BufferedImage and drawing your notes to it. This way, you don't need to redraw everything in real time and can simply blit pixels to the screen to scroll. To do this:
BufferedImage r = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
Graphics2D g1 = r.createGraphics();
drawBufferedImages(g1);

Here drawBufferedImages() draws my keyboard, the background, and the guides. I tried drawing everything beforehand to one big BufferedImage, only to have Java run out of memory for any song longer than a minute and a half. So I don't recommend doing that.


Then, to draw everything, override paint:
public void paint(Graphics g) {
    Graphics2D g1 = (Graphics2D) g;
    g1.drawImage(background, 0, 0, background_width, background_height,
                 0, 0, background_width, background_height, null);
    drawNotePass(g1, active_list);
    g1.drawImage(keyboard, 0, background_height, keyboard_width,
                 background_height + keyboard_height,
                 0, 0, keyboard_width, keyboard_height, null);
}
where background and keyboard are the BufferedImages drawn earlier.

For good-looking notes and other stuff, use gradients. To make a simple horizontal gradient for a vertical note, set x1 to the leftmost x-coordinate and x2 to the rightmost x-coordinate. (I recommend not choosing two colors that are extremely different :P )
Color one = new Color(  54, 161, 201); // Random color :D
Color two = new Color( 143, 91, 56 ); // Random color :D
GradientPaint fill = new GradientPaint( x1, 0, one , x2, 0, two );
g.setPaint( fill ); // g is Graphics2D
References: Graphics, BufferedImage, GradientPaint

Synthesia Clone "Piano Hero": Parsing Midi Files

Summer Project Numero Uno: Creating a Synthesia Clone for Android

Background: Synthesia is a piano game and trainer written in C++ that builds a piano roll out of a Midi file. Synthesia was also originally named "Piano Hero" before Activision sent a cease and desist letter telling them to change their name.

Synthesia is extremely helpful for learning new songs quickly (especially if you're slow at reading sheet music like me). However, finding a decent position for a computer near/on top of your keyboard is very troublesome. And with the recent hype over tablet computers, most of which run Android, getting Synthesia to fit on the thingy that holds sheet music is a must.

Midi Files: Midi files are composed of MidiEvents, which generally represent an action such as a Note On, and are organized into tracks, which represent separate streams of MidiEvents. Every event has an associated delta-time stamp, measured in ticks, which determines when it should occur relative to the previous event. In order to convert ticks to seconds, we need to know two more things: the resolution and the tempo. The resolution is the number of ticks per quarter note, which I kind of think of as the quality of the Midi, and can be found in the file header. The tempo is the number of microseconds per quarter note, though most people appear to convert this to beats per minute. The tempo is a little more difficult since it can change during a song. Once we have all of this, the conversion is some pretty straightforward algebra:
ppqn = 480                    // ticks per quarter note, from the file header
bpm  = 60000000 / tempo       // quarter notes per minute; tempo comes from MidiEvents
mspt = 60000 / (bpm * ppqn)   // milliseconds per tick
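As a sanity check, the conversion above can be wrapped in a small helper (the class and method names are mine, not from any library). Note that it simplifies to tempo / 1000 / ppqn:

```java
// Tick-to-time conversion from the formulas above.
public class TickClock {
    // Milliseconds per tick, given tempo (microseconds per quarter
    // note, from a 0x51 meta event) and ppqn (ticks per quarter note,
    // from the file header).
    public static double msPerTick(int tempo, int ppqn) {
        double bpm = 60000000.0 / tempo;  // quarter notes per minute
        return 60000.0 / (bpm * ppqn);    // milliseconds per tick
    }
}
```

For example, the common default tempo of 500000 us per quarter note (120 bpm) at 480 ppqn gives about 1.04 ms per tick.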
Working with ticks in Java is a little different, because Java automatically converts the delta ticks to cumulative ticks: events with delta ticks of 10, 10, 10 respectively would appear as cumulative ticks 10, 20, 30. Now here's some half-pseudo-code for parsing a single track of a Midi file in Java:
int bpm = 120; // default is 120
int tempo = 0;
int ppqn = 480; // get from the file header
long last_tick = 0;

double ct = 0; // the cumulative time, in milliseconds
double mspt = 60000.0 / ((double) bpm * (double) ppqn);

for (int i = 0; i < track.size(); i++) {
    MidiEvent event = track.get(i);
    MidiMessage msg = event.getMessage();

    if (msg instanceof ShortMessage) {
        switch (((ShortMessage) msg).getCommand()) {
            case ShortMessage.NOTE_ON:
            case ShortMessage.NOTE_OFF:
                ct += mspt * (event.getTick() - last_tick);
                last_tick = event.getTick();
                break;
        }
    } else if (msg instanceof MetaMessage) {
        switch (((MetaMessage) msg).getType()) {
            case 0x51: // tempo change
                ct += mspt * (event.getTick() - last_tick);
                last_tick = event.getTick();
                tempo = getIntFromByteArray(((MetaMessage) msg).getData());
                bpm = 60000000 / tempo;
                mspt = 60000.0 / ((double) bpm * (double) ppqn);
                break;
        }
    }
}
For the sake of a piano roll, we only need to worry about these three types of messages. Notice that NOTE_ON and NOTE_OFF are two separate events. This means that if you want to create some kind of Note object, you need either to keep an array of half-complete notes or to look ahead for the next NOTE_OFF event with the same key number.
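Here's one possible sketch of the "half-complete notes" approach, keyed by note number (the Note class below is a hypothetical minimal version, not from the posts above). One caveat: many files encode Note Off as a Note On with velocity 0, which a real parser would also route to noteOff():

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Pairs separate NOTE_ON / NOTE_OFF events into complete Note objects
// by keeping the pending (half-complete) note for each key number.
public class NotePairer {

    public static class Note {
        public final int number;       // Midi key number
        public final double startTime; // from the NOTE_ON event
        public double endTime;         // filled in by the NOTE_OFF event
        Note(int number, double startTime) {
            this.number = number;
            this.startTime = startTime;
        }
    }

    private final Map<Integer, Note> pending = new HashMap<>();
    public final List<Note> finished = new ArrayList<>();

    public void noteOn(int number, double time) {
        pending.put(number, new Note(number, time));
    }

    public void noteOff(int number, double time) {
        Note n = pending.remove(number);
        if (n != null) {
            n.endTime = time;
            finished.add(n);
        }
    }
}
```

The parser loop would call noteOn()/noteOff() as it walks the track, and the finished list is exactly what drawRoll() needs.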

One last precaution! The first track in Type 1 Midi files contains all of the tempo events for all of the other tracks and is called the tempo map.

There are three types of Midi files:

Type 0: Everything is saved in one track.
Type 1: Multiple tracks with individual parts on separate tracks.
Type 2: Multiple tracks which represent different patterns. (Not commonly found)

So what I did was go through the first track, find all of the tempo events, and create duplicate events in the rest of the tracks.
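That fix can be sketched directly with javax.sound.midi: copy each tempo meta event (type 0x51) from track 0 into every other track, so each track can then be parsed on its own. This is my reconstruction of the idea, not the original code:

```java
import javax.sound.midi.MetaMessage;
import javax.sound.midi.MidiEvent;
import javax.sound.midi.Sequence;
import javax.sound.midi.Track;

// Duplicates the tempo map (track 0 of a Type 1 file) into every
// other track, so per-track parsing sees all tempo changes.
public class TempoSpreader {
    public static void spreadTempoEvents(Sequence seq) {
        Track[] tracks = seq.getTracks();
        if (tracks.length < 2) return;
        Track tempoMap = tracks[0];
        for (int i = 0; i < tempoMap.size(); i++) {
            MidiEvent e = tempoMap.get(i);
            if (e.getMessage() instanceof MetaMessage
                    && ((MetaMessage) e.getMessage()).getType() == 0x51) {
                // Insert a copy at the same tick in each other track
                for (int t = 1; t < tracks.length; t++) {
                    tracks[t].add(new MidiEvent(e.getMessage(), e.getTick()));
                }
            }
        }
    }
}
```

Track.add() keeps events sorted by tick, so the copies land in the right place automatically.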

Live Backgrounds

As a side project I decided to do something android-related. Since it seems I'm not good at making anything but fun visualizations, I had the idea to make some live backgrounds, which the Android platform now supports. I have a few concept sketches that I'll try to bring into reality. I suspect I'll package 3 live backgrounds together in a free download on the Market and create a paid ($0.50 - $1.00?) download to give access to all future backgrounds that I make. Here's a sample of my first background below. It's been configured two different ways to give different effects.



I am aware that there is some wobbliness to the arcs. There are a few things I noticed that need to be fine-tuned that I can't recall at the moment. This visualization was just rendered on a PC using standard Java constructs. I still have to port it over to the Android platform and use their drawing API. At that point if it's still looking wobbly I'll find a way to work that out. Look for more previews in future posts.

Oceania Plans

We wrote a partial interpreter for Newspeak-on-Squeak by hand in C, adding primitives for registering interest in interrupts and for arbitrary access to memory. On this interpreter, we managed to get Hello World, a clock, and some keyboard input. The approach is no longer attractive because of how time-consuming and error-prone it would be to translate BitBlt, Balloon, garbage collection, and become.

The plan is to take SqueakNOS's copy of VMMaker and add the Newspeak bytecodes (pushImplicitReceiver, pushExplicitOuter, dynamicSuperSend) from Newspeak's copy of VMMaker. Such a VM will require a BlockContext-free image, so we need to untangle the current Newspeak implementation from Squeak 3.9. This is also a goal for the Ministry of Truth so that Newspeak can run on Cog. Such untangling will likely involve improvements in the implementation of the new mirrors and the representation of mixins and various metadata.

Nominal Progress

We bought the domain name "daftgenius.com" and designed our logo!
Hopefully you get it if you've heard of the potato battery (a common kids' science project).


Also we have a few projects coming up in the future which we'll post on soon.
One is a game, and one is an engineering project... Hint: aviation.