Visiting a session about OpenStreetMap at Barcamp Graz 2012 inspired me to make an iPhone app which shows me restaurants, bars etc. in which smoking is not allowed (yes, this is still an issue in 2012).
Getting the Data
OpenStreetMap nodes are filled with useful meta information, one example being the “smoking” tag, which indicates whether smoking is allowed, not allowed or only allowed in a separate area. So first we have to get the data from somewhere. Because downloading the whole planet is a bit much, I downloaded only the Austria file (about 320 MB compressed / 4 GB uncompressed) from http://download.geofabrik.de/osm/europe/.
The raw file contains all available nodes, ways and relations and their meta data. As I was only interested in nodes containing the smoking tag, I needed to filter the file down to a much smaller one containing only the necessary data. This can be done with the command line tool osmosis (install instructions). To make life easier I also used OSMembrane as a GUI on top of osmosis. The configuration I used was read-xml -> node-key filter -> write-xml:
The node-key filter was configured with “Task = node-key” and “keyList = smoking”, which keeps only the nodes containing the tag “smoking”.
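If you prefer the command line to OSMembrane, the same pipeline should look roughly like this (the file names are just examples):

    osmosis --read-xml file="austria.osm" --node-key keyList="smoking" --write-xml file="smoking.osm"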
When you run this pipeline, the result is a 234 kB OSM file containing standard XML. A sample node from the file looks like this:
<tag k="addr:city" v="Graz"/>
<tag k="addr:country" v="AT"/>
<tag k="addr:housenumber" v="17"/>
<tag k="addr:postcode" v="8010"/>
<tag k="addr:street" v="Zinzendorfgasse"/>
<tag k="amenity" v="pub"/>
<tag k="contact:email" v="firstname.lastname@example.org"/>
<tag k="contact:fax" v="+43 316 58 14 77"/>
<tag k="contact:mobile" v="+43 699 10 350 350"/>
<tag k="contact:phone" v="+43 316 22 50 53"/>
<tag k="internet_access" v="wlan"/>
<tag k="name" v="Propeller"/>
<tag k="old_name" v="Schuberthof"/>
<tag k="opening_hours" v="09:00-02:00"/>
<tag k="smoking" v="isolated"/>
<tag k="source" v="survey"/>
<tag k="website" v="http://www.propeller.co.at/"/>
As you can see, it contains a lot of useful data, e.g. the name, address and other contact information of the facility. The most important data for our app are of course the geolocation (lat/lon) and the “smoking” tag.
The exported file can now be used in an iPhone app by parsing it with the NSXMLParser class, for example. The geolocation can be used to place pins on a MKMapView. Smoking restrictions of the facilities can be visualized using custom map annotations. Using the other meta data it is also possible to show detailed information about a facility (and make phone calls directly within the app). A simple app using this data could look like this:
Please contact me if you need further details or require source code.
I like to have a clean, non-distracting desktop wallpaper. For a long time now I have been using a solid grey background (57% white). Inspired by Matt Gemmell’s subtle UI texture tutorial I made a grainy wallpaper that has the same non-distracting nature as a solid color, but is more “yummy” than pure grey. The archive contains the wallpapers optimized for 7 different screen resolutions.
Did you ever get stuck when trying to come up with new game ideas? Try out my little tool I put together:
It generates new game ideas by randomly choosing from genre, visual style and other parameters. You can also preselect each parameter, e.g. if you just want to make 8bit 2D games.
Feel free to use it, share it or do whatever you like with it!
I am currently developing my first app on Android, which can be quite frustrating as an iOS developer (more on that in another post perhaps).
In this post I will focus on how to develop a multi-touch enabled musical instrument app with low-latency playback of audio samples.
The app should be able to play about 50 different sound samples, at least 5 of them simultaneously. Playing a sound should be instant, so low latency is required.
Android Sound Libraries
Android offers three different sound APIs: MediaPlayer, SoundPool and AudioTrack. MediaPlayer is for playback of longer audio files and video, so this API is not suitable for our needs.
SoundPool’s documentation tells us that it should be used for repeated, simultaneous low-latency playback of multiple short sound samples. That’s exactly what we need! Unfortunately it turned out to be very laggy during playback. So the last resort was Android’s low-level audio API, AudioTrack.
To play a sound with AudioTrack, the file has to be read into a buffer which is then pushed directly to the audio hardware. Playing multiple sounds simultaneously requires a thread for each sound.
Playing a Sound
To encapsulate audio playback, I created the AudioTrackSoundPlayer class:
The two members are a HashMap to store the threads of the currently playing sounds and a reference to the activity context, which is needed later to access the sound files via the AssetManager.
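A minimal skeleton of the class could look roughly like this (the constructor and exact declarations are an approximation; only threadMap, the context and the PlayThread class actually appear in the snippets below):

    import java.util.HashMap;
    import android.content.Context;
    // the playback code further below additionally needs android.content.res.*, android.media.* and java.io.*

    public class AudioTrackSoundPlayer {
        private HashMap<String, PlayThread> threadMap = new HashMap<String, PlayThread>();
        private Context context;

        public AudioTrackSoundPlayer(Context context) {
            this.context = context;
        }

        // playNote(), stopNote() and the PlayThread class follow below
    }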
public void playNote(String note) {
    if (threadMap.containsKey(note))
        return; // note is already playing
    PlayThread thread = new PlayThread(note);
    thread.start();
    threadMap.put(note, thread);
}

public void stopNote(String note) {
    PlayThread thread = threadMap.get(note);
    if (thread != null) {
        thread.requestStop();
        threadMap.remove(note);
    }
}
When playing a note a new thread is started and put into the map. Notes are only played if they are not already playing. Stopping a note (if the user releases the button) fetches the thread out of the map and requests it to stop.
To check if a note is already playing we simply have to look for the key in the map.
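That check can be exposed as a small helper method, something like this (the method name is just my suggestion):

    public boolean isNotePlaying(String note) {
        return threadMap.containsKey(note);
    }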
Now to the fun part: the PlayThread class.
The class stores the note it plays, a stop flag which is set when the thread should be terminated (and thus stop playing the sound) and the AudioTrack instance used to play the sound.
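Sketched out, the fields and constructor could look like this (since run() accesses the outer class’s context, I am assuming PlayThread is an inner class of AudioTrackSoundPlayer; the volatile modifier is my addition so the playback thread reliably sees the flag being set from the UI thread):

    private class PlayThread extends Thread {
        private String note;                   // name of the sample, e.g. "c1" for c1.wav
        private volatile boolean stop = false; // set via requestStop() to end playback
        private AudioTrack audioTrack;

        public PlayThread(String note) {
            this.note = note;
        }

        // run() and requestStop() are shown below
    }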
public void run() {
    try {
        String path = note + ".wav";
        AssetManager assetManager = context.getAssets();
        AssetFileDescriptor ad = assetManager.openFd(path);
        long fileSize = ad.getLength();
        int bufferSize = 4096;
        byte[] buffer = new byte[bufferSize];

        audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 22050,
                AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
                bufferSize, AudioTrack.MODE_STREAM);
First the file name is constructed by adding “.wav” to the note. Then the file size is read by opening the file with AssetManager. Next we create a 4096 byte buffer to store the audio data. Adapt the buffer size to your needs. 4096 seemed to work best for me. The AudioTrack object is then created with the appropriate configuration, depending on the sample files you use.
        audioTrack.play();

        InputStream audioStream = null;
        int headerOffset = 0x2C; // raw PCM data in a standard WAVE file starts at this offset
        long bytesWritten = 0;
        int bytesRead = 0;

        while (!stop) { // loop sound
            audioStream = assetManager.open(path);
            bytesWritten = 0;
            bytesRead = 0;
            audioStream.read(buffer, 0, headerOffset); // skip the WAVE header

            // read until end of file
            while (!stop && bytesWritten < fileSize - headerOffset) {
                bytesRead = audioStream.read(buffer, 0, bufferSize);
                bytesWritten += audioTrack.write(buffer, 0, bytesRead);
            }
            audioStream.close();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }

    audioTrack.stop();
    audioTrack.release();
}
Next, playback on the AudioTrack object is started with play(). Then we have to continuously feed audio data to it. In a standard WAVE file the raw audio data starts at offset 0x2C, so we skip the header. The outer while loop loops the sound sample indefinitely. The inner while loop reads the actual audio data and feeds it to the AudioTrack object until the end of the file is reached. If the stop flag is set, both loops exit, the AudioTrack is released and the thread terminates. The requestStop method simply sets the stop flag:
public void requestStop() {
    stop = true;
}
The AudioTrackSoundPlayer class gives us a very simple interface for usage in musical instrument apps. Simultaneous low-latency playback of sound samples using Android’s AudioTrack API can be done quite easily by spawning threads for each sample. I hope this saves you some of the trouble I went through while figuring out Android audio.
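Usage from an activity could then look something like this (the note names are just examples; the class expects a matching WAV file per note in the assets folder):

    AudioTrackSoundPlayer soundPlayer = new AudioTrackSoundPlayer(this);

    soundPlayer.playNote("c1"); // touch down on a key: start looping c1.wav
    // ...
    soundPlayer.stopNote("c1"); // touch up: stop it again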
To further improve the class, additional configuration settings could be added, as well as support for compressed file formats like MP3 or Ogg. Currently only uncompressed PCM data (WAV files) can be played.
The complete project including the code used for multi-touch handling can be downloaded here: PianoTest.zip.