How I built EventMe – My Udacity Success Story

Ever wished you had an app that put all the events happening around you at your fingertips? If so, the answer is EventMe.


This was my final project for Udacity’s Android Nanodegree program. The Capstone Project phase helped me showcase my idea as a working Android app.

App Summary

EventMe finds events happening within a defined radius of the user's current location. The intended user is anyone who wishes to learn about events happening in a given area. People who have just moved to a new town might find this app useful, as could Uber/Lyft drivers who want to know where demand for rides might be high.
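The core radius check can be sketched in a few lines of plain Java. This is a hypothetical helper using the haversine formula, not EventMe's actual code:

```java
public class GeoFilter {

    // Haversine distance in kilometers between two lat/lng points.
    static double distanceKm(double lat1, double lng1, double lat2, double lng2) {
        double earthRadiusKm = 6371.0;
        double dLat = Math.toRadians(lat2 - lat1);
        double dLng = Math.toRadians(lng2 - lng1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                * Math.sin(dLng / 2) * Math.sin(dLng / 2);
        return 2 * earthRadiusKm * Math.asin(Math.sqrt(a));
    }

    // True if an event's location falls inside the user's search radius.
    static boolean isWithinRadius(double userLat, double userLng,
                                  double eventLat, double eventLng, double radiusKm) {
        return distanceKm(userLat, userLng, eventLat, eventLng) <= radiusKm;
    }
}
```

Filtering a list of candidate events then reduces to calling `isWithinRadius` once per event against the device's last known location.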

Want to try it out?

EventMe GitHub Repo

How to develop an Android Audio Playback App

I thought I was done developing my Music Player app until I hit a few scenarios that were a bit embarrassing. In hindsight, it was obvious: you will hate your app if it doesn't pause audio playback when the phone rings. So how do you handle such a scenario? Request the audio focus.

Audio focus refers to the assumption that only one thing should be playing at a time, which means the system needs a way to track which apps are currently playing audio. Audio focus goes by the adage: when you're holding it, you get to speak. But the system will take it away when it's someone else's turn.

You can manage audio focus for your app with AudioManager. When you're ready to play something, you simply request it (and when you're done, remember to release it). When audio focus is granted, you can start your playback. But, as I said, the system may take the audio focus back, either temporarily or permanently, so you need an OnAudioFocusChangeListener to keep track of your status and react to those changes. Wondering how to put all this together? Here's some demo code that translates English color names into their pronunciation in a regional language.

public class AudioActivity extends AppCompatActivity {

    /** Handles playback of all the sound files */
    private MediaPlayer mMediaPlayer;

    /** Handles audio focus when playing a sound file */
    private AudioManager mAudioManager;

    /**
     * This listener gets triggered when the {@link MediaPlayer} has completed
     * playing the audio file.
     */
    private MediaPlayer.OnCompletionListener mCompletionListener = new MediaPlayer.OnCompletionListener() {
        @Override
        public void onCompletion(MediaPlayer mediaPlayer) {
            // Now that the sound file has finished playing, release the media player resources.
            releaseMediaPlayer();
        }
    };

    /**
     * This listener gets triggered whenever the audio focus changes
     * (i.e., we gain or lose audio focus because of another app or device).
     */
    private AudioManager.OnAudioFocusChangeListener mOnAudioFocusChangeListener = new AudioManager.OnAudioFocusChangeListener() {
        @Override
        public void onAudioFocusChange(int focusChange) {
            if (focusChange == AudioManager.AUDIOFOCUS_LOSS_TRANSIENT ||
                    focusChange == AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK) {
                // The AUDIOFOCUS_LOSS_TRANSIENT case means that we've lost audio focus for a
                // short amount of time. The AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK case means that
                // our app is allowed to continue playing sound but at a lower volume. We'll treat
                // both cases the same way because our app is playing short sound files.

                // Pause playback and reset player to the start of the file. That way, we can
                // play the word from the beginning when we resume playback.
                mMediaPlayer.pause();
                mMediaPlayer.seekTo(0);
            } else if (focusChange == AudioManager.AUDIOFOCUS_GAIN) {
                // The AUDIOFOCUS_GAIN case means we have regained focus and can resume playback.
                mMediaPlayer.start();
            } else if (focusChange == AudioManager.AUDIOFOCUS_LOSS) {
                // The AUDIOFOCUS_LOSS case means we've lost audio focus.
                // Stop playback and clean up resources.
                releaseMediaPlayer();
            }
        }
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.word_list);

        // Create and setup the {@link AudioManager} to request audio focus
        mAudioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

        // Create a list of words
        final ArrayList<Word> words = new ArrayList<Word>();
        words.add(new Word("red", "weṭeṭṭi", R.drawable.color_red, R.raw.color_red));
        words.add(new Word("mustard yellow", "chiwiiṭә", R.drawable.color_mustard_yellow,
                R.raw.color_mustard_yellow));
        words.add(new Word("dusty yellow", "ṭopiisә", R.drawable.color_dusty_yellow,
                R.raw.color_dusty_yellow));
        words.add(new Word("green", "chokokki", R.drawable.color_green, R.raw.color_green));
        words.add(new Word("brown", "ṭakaakki", R.drawable.color_brown, R.raw.color_brown));
        words.add(new Word("gray", "ṭopoppi", R.drawable.color_gray, R.raw.color_gray));
        words.add(new Word("black", "kululli", R.drawable.color_black, R.raw.color_black));
        words.add(new Word("white", "kelelli", R.drawable.color_white, R.raw.color_white));

        // Create an {@link WordAdapter}, whose data source is a list of {@link Word}s. The
        // adapter knows how to create list items for each item in the list.
        WordAdapter adapter = new WordAdapter(this, words, R.color.category_colors);

        // Find the {@link ListView} object in the view hierarchy of the {@link Activity}.
        // There should be a {@link ListView} with the view ID called list, which is declared in the
        // word_list.xml layout file.
        ListView listView = (ListView) findViewById(R.id.list);

        // Make the {@link ListView} use the {@link WordAdapter} we created above, so that the
        // {@link ListView} will display list items for each {@link Word} in the list.
        listView.setAdapter(adapter);

        // Set a click listener to play the audio when the list item is clicked on
        listView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> adapterView, View view, int position, long l) {
                // Release the media player if it currently exists because we are about to
                // play a different sound file
                releaseMediaPlayer();

                // Get the {@link Word} object at the given position the user clicked on
                Word word = words.get(position);

                // Request audio focus in order to play the audio file. The app needs to play a
                // short audio file, so we will request audio focus for a short amount of time
                // with AUDIOFOCUS_GAIN_TRANSIENT.
                int result = mAudioManager.requestAudioFocus(mOnAudioFocusChangeListener,
                        AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);

                if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
                    // We have audio focus now.

                    // Create and setup the {@link MediaPlayer} for the audio resource associated
                    // with the current word
                    mMediaPlayer = MediaPlayer.create(AudioActivity.this, word.getAudioResourceId());

                    // Start the audio file
                    mMediaPlayer.start();

                    // Setup a listener on the media player, so that we can stop and release the
                    // media player once the sound has finished playing.
                    mMediaPlayer.setOnCompletionListener(mCompletionListener);
                }
            }
        });
    }

    @Override
    protected void onStop() {
        super.onStop();
        // When the activity is stopped, release the media player resources because we won't
        // be playing any more sounds.
        releaseMediaPlayer();
    }

    /**
     * Clean up the media player by releasing its resources.
     */
    private void releaseMediaPlayer() {
        // If the media player is not null, then it may be currently playing a sound.
        if (mMediaPlayer != null) {
            // Regardless of the current state of the media player, release its resources
            // because we no longer need it.
            mMediaPlayer.release();

            // Set the media player back to null. For our code, we've decided that
            // setting the media player to null is an easy way to tell that the media player
            // is not configured to play an audio file at the moment.
            mMediaPlayer = null;

            // Regardless of whether or not we were granted audio focus, abandon it. This also
            // unregisters the AudioFocusChangeListener so we don't get anymore callbacks.
            mAudioManager.abandonAudioFocus(mOnAudioFocusChangeListener);
        }
    }
}


How to get started with Google Maps API?

Building interactive map layouts has gotten easier with the latest release of the Google Maps APIs. Want to build a page that shows a map side by side with a Street View panorama of the clicked location? How do you achieve this sort of functionality? Simple: make use of the Street View service. Let's look at some access strategies and sample source code.

Directly Accessing Street View Data

You may wish to programmatically determine the availability of Street View data, or return information about particular panoramas, without requiring direct manipulation of a map/panorama. You may do so using the StreetViewService object, which provides an interface to the data stored in Google's Street View service.

Street View Service Requests

Accessing the Street View service is asynchronous, since the Google Maps API needs to make a call to an external server. For that reason, you need to pass a callback method to execute upon completion of the request. This callback method processes the result.

You may initiate two types of requests to the StreetViewService:

  • Requests with a StreetViewPanoRequest return panorama data for a reference ID that uniquely identifies the panorama. Note that these reference IDs are only stable for the lifetime of that panorama's imagery.
  • Requests with a StreetViewLocationRequest search for panorama data over a given area, given a passed LatLng.

Street View Sample Source Code


<div id="map" style="width: 45%; height: 100%; float: left"></div>
<div id="pano" style="width: 45%; height: 100%; float: left"></div>
<!-- Click the map to set a new location for the Street View camera. -->

<script>
var map;
var panorama;

function initMap() {
  var berkeley = {lat: 37.869085, lng: -122.254775};
  var sv = new google.maps.StreetViewService();

  panorama = new google.maps.StreetViewPanorama(document.getElementById('pano'));

  // Set up the map.
  map = new google.maps.Map(document.getElementById('map'), {
    center: berkeley,
    zoom: 16,
    streetViewControl: false
  });

  // Set the initial Street View camera to the center of the map
  sv.getPanorama({location: berkeley, radius: 50}, processSVData);

  // Look for a nearby Street View panorama when the map is clicked.
  // getPanorama will return the nearest pano when the
  // given radius is 50 meters or less.
  map.addListener('click', function(event) {
    sv.getPanorama({location: event.latLng, radius: 50}, processSVData);
  });
}

function processSVData(data, status) {
  if (status === google.maps.StreetViewStatus.OK) {
    var marker = new google.maps.Marker({
      position: data.location.latLng,
      map: map,
      title: data.location.description
    });

    panorama.setPano(data.location.pano);
    panorama.setPov({
      heading: 270,
      pitch: 0
    });
    panorama.setVisible(true);

    marker.addListener('click', function() {
      var markerPanoID = data.location.pano;
      // Set the Pano to use the passed panoID.
      panorama.setPano(markerPanoID);
      panorama.setPov({
        heading: 270,
        pitch: 0
      });
      panorama.setVisible(true);
    });
  } else {
    console.error('Street View data not found for this location.');
  }
}
</script>

<!-- Replace YOUR_API_KEY with your own Maps API key. -->
<script async defer
    src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap">
</script>


Machine Learning on Android, is that possible?

I had been toying with the idea of trying out machine learning libraries on the Android platform, but was always left building trivial applications given the processing constraints and limitations of the mobile platform. That changed when I chanced upon TensorFlow, which has some amazing documentation on how to leverage its machine intelligence library on mobile.

A look at the app in real time:


The app accomplishes this feat using a bundled machine learning model running in TensorFlow on the device (no network calls to a backend service). The model is trained against millions of images so that it can look at the photos the camera feeds it and classify the object into its best guess (from the 1000 object classifications it knows). Along with its best guess, it shows a confidence score to indicate how sure it is about its guess.
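The "best guess plus confidence score" step comes down to a softmax over the model's raw output scores, then picking the highest-probability class. A simplified plain-Java sketch of that post-processing, purely illustrative and not the demo app's actual code:

```java
public class Classifier {

    // Numerically stable softmax: turns raw model scores into probabilities
    // that sum to 1. The max score is subtracted first to avoid overflow.
    static double[] softmax(double[] scores) {
        double max = Double.NEGATIVE_INFINITY;
        for (double s : scores) max = Math.max(max, s);
        double[] probs = new double[scores.length];
        double sum = 0;
        for (int i = 0; i < scores.length; i++) {
            probs[i] = Math.exp(scores[i] - max);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) probs[i] /= sum;
        return probs;
    }

    // Index of the most probable class: the model's "best guess".
    static int argMax(double[] probs) {
        int best = 0;
        for (int i = 1; i < probs.length; i++) {
            if (probs[i] > probs[best]) best = i;
        }
        return best;
    }
}
```

In the real app the scores array would have 1000 entries (one per known object class), and `probs[argMax(probs)]` would be the confidence value shown on screen.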

Interested in learning more about image recognition?

Looking for the demo app?

Try the Android demo build. I'm sure it will get you excited!



How to Make an iOS App?

In Udacity’s iOS courses, you’ll bridge the gap between the physical and the virtual. You’ll learn to leverage common hardware features on iPhone and iPad, including the camera, microphone, GPS, gyroscope, and accelerometer to create engaging and interactive user experiences. Whether you want to build the app of your dreams or land a job as an iOS developer, you’ll be developing for platforms used by hundreds of millions of people every day.

Welcome to the world of iOS Development! This program will help you get started on your path to creating high-quality iOS apps.
Learn how to build advanced, modern iOS applications with polished user interfaces on top of industry-standard frameworks.

Intro to iOS App Development with Swift

PROJECT: Make your first iPhone app

  • Take the first step in becoming an iOS Developer by learning about Swift and writing your first app.
  • This course focuses on the syntax of the Swift programming language. By the end of the course, you’ll be able to apply Swift essentials to building iOS apps.
  • Learn the iOS UIKit framework, which is the cornerstone of creating user interfaces in all iOS apps and crucial for any iOS Developer to be intimately familiar with.
  • Learn how to incorporate networking into your apps to access data from around the world. Build the On the Map app to share location and fun links with fellow students.
  • Persisting data is a core skill for any iOS developer. Learn how to store app data to your device’s hard drive using two common techniques: Core Data and NSKeyedArchiver.
  • In this course, you’ll learn standard methodologies for debugging software, and how to use Xcode’s debugging tools to find and squash bugs.
  • Learn the process of building an app, taking your ideas from the drawing board to the App Store!