I am currently involved in the Qatar Arabic Language Bank (QALB) project at the CMU-Q NLP Lab as a research associate. The project aims to build the largest hand-annotated corpus of edits for Modern Standard Arabic (1-2 million words). The corpus will consist of Arabic sentences and their corrections, along with other annotation data. Serving as the project's software engineer, I built, and currently maintain, a web-based annotation framework consisting of an annotation interface (pictured above), an administrative interface, and an API server that powers both. Unlike most annotation tools, which present a text-editor-like interface and therefore cannot track user actions or produce exact token alignments, our editor is token-based. By constraining edit operations (edit, add, move, delete, merge, and split) to individual tokens, we can track the precise alignment of each token and record every token-based action. We published a demo paper at the IJCNLP 2013 conference detailing the design of our framework.
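To illustrate the token-based idea, here is a minimal sketch of how constraining operations to whole tokens makes alignments fall out for free. The function names, action format, and data layout are invented for illustration; they are not the framework's actual API.

```python
# Hypothetical sketch: every edit operation applies to whole tokens, so each
# corrected token can carry the indices of the source token(s) it came from.

def apply_edits(tokens, actions):
    """Apply token-level actions; return (corrected_tokens, alignment).

    alignment maps each corrected-token index to the list of
    source-token indices it was derived from.
    """
    # Each working token carries the source indices it originated from.
    work = [(tok, [i]) for i, tok in enumerate(tokens)]
    for act in actions:
        op, i = act["op"], act["at"]
        if op == "edit":              # replace a token's text in place
            work[i] = (act["new"], work[i][1])
        elif op == "delete":          # remove the token entirely
            del work[i]
        elif op == "add":             # insert a new token (no source)
            work.insert(i, (act["new"], []))
        elif op == "split":           # one token becomes several
            _, src = work[i]
            work[i:i + 1] = [(part, src) for part in act["parts"]]
        elif op == "merge":           # join token i with token i + 1
            (a, sa), (b, sb) = work[i], work[i + 1]
            work[i:i + 2] = [(a + b, sa + sb)]
        elif op == "move":            # relocate a token
            work.insert(act["to"], work.pop(i))
    corrected = [t for t, _ in work]
    alignment = {j: src for j, (_, src) in enumerate(work)}
    return corrected, alignment
```

For example, editing the first token and splitting the second yields an alignment that records the second and third corrected tokens as both descending from source token 1 — exactly the kind of trace a free-form text editor cannot provide.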
At the beginning of my sophomore year at CMU-Q, I joined the HALA Roboceptionist project as a research assistant. HALA (pictured above) is a conversational, bilingual (Arabic- and English-speaking) robotic receptionist at CMU-Q that helps visitors query information such as directions and events. At the time, the team was working on a new face for HALA with a more realistic animation system. My research focused on identifying the set of mouth shapes (or visemes) particular to the Arabic language in order to lip-sync HALA. This included researching Arabic phonemes, designing experiments that captured their full range, filming participants as they uttered each of the different Arabic phonemes, and then grouping similar mouth shapes.
In my free time, I developed an iOS app that simulates the Game Boy Camera, my first digital camera. Below is a screenshot of the app's interface; you can find out more at pixl8r-app.com.
In 2014, two friends and I participated in the Koding hackathon. The theme we chose was 'Challenges associated with real time communication and translation (Star Trek universal translation anyone?)'. We developed jitalk.me, a messaging app that translated text in (almost) any language into graphical symbols. The idea was that symbols could, if not always, transcend linguistic barriers. Though very crude, jitalk.me produced some surprising results when we tried different phrases in different languages. We used Firebase as the chat back-end, NLTK and Google Translate for text processing, and bottle.py for the server.
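The core pipeline can be sketched roughly as follows. The symbol table and the translation stub below are invented for illustration; the real app delegated those steps to Google Translate and NLTK.

```python
# Illustrative sketch of jitalk.me's idea: normalize an incoming message to
# English, then map each word to a graphical symbol when one is known.

SYMBOLS = {  # hypothetical word -> symbol table
    "sun": "☀", "rain": "☂", "love": "♥", "music": "♫",
}

def translate_to_english(text, source_lang):
    # Stand-in for the Google Translate call; a no-op for English input
    # so the sketch stays self-contained.
    return text

def to_symbols(message, source_lang="en"):
    """Render a message as symbols, falling back to the word itself."""
    english = translate_to_english(message, source_lang)
    out = []
    for word in english.lower().split():
        word = word.strip(".,!?")
        out.append(SYMBOLS.get(word, word))
    return " ".join(out)
```

Routing everything through English first is what let a single symbol table serve (almost) any input language.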
During my senior year, I freelanced for Rhys Himsworth, the director of painting and printmaking at Virginia Commonwealth University in Qatar (VCU-Q), working on interactive art installations for the 'Entropy' exhibition. I was responsible for designing and assembling the electronics and writing the driving software for one of the installations, as well as helping assemble and debug some of the others. Below is a video of the exhibition.
The major installation I worked on for 'Entropy', called 'An Elusive Truth', consisted of 80 speakers, each connected to an individual text-to-speech chip, all powered by two Arduino Megas. The Arduino Megas were connected to a PC that retrieved up-to-date news articles from a given list of RSS feeds. Once retrieved, each news article title was assigned to a random text-to-speech module to be spoken aloud. The PC-side code for this project was written in Python using third-party RSS and serial I/O libraries, while the Arduino code was written with the Arduino SDK. The video below shows the installation in action.
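A minimal sketch of the PC-side dispatch loop might look like the following. The packet framing, module addressing, and `send` callback are all assumptions made for illustration; the real code used third-party RSS and serial I/O libraries (e.g. feedparser and pyserial) to fetch feeds and talk to the Arduinos.

```python
# Hypothetical dispatch loop: assign each fetched article title to a random
# text-to-speech module and send it over serial. Framing bytes are invented.
import random

NUM_MODULES = 80  # one text-to-speech chip per speaker

def frame(module_id, title):
    """Build a hypothetical serial packet: STX, module byte, text, ETX."""
    return b"\x02" + bytes([module_id]) + title.encode("ascii", "ignore") + b"\x03"

def dispatch(titles, send, rng=random):
    """Send each title to a randomly chosen TTS module via `send`."""
    for title in titles:
        module_id = rng.randrange(NUM_MODULES)
        send(frame(module_id, title))
```

In the real setup, `send` would write to the serial port of whichever Arduino Mega hosts the chosen module, and the Arduino firmware would forward the text to that module's text-to-speech chip.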
A more recent exhibition I worked on with Mr. Himsworth centered on augmenting outdated electronics with modern features. First, I rewired an old polygraph machine (photo below) to 'tick' each time one of the 20 most recent posts on Mr. Himsworth's Facebook page received a new like.
Second, I connected an oscilloscope (photo below) to an Arduino in order to display the semaphore representation of the sequence of characters in a given book.
For both these projects, I designed the necessary circuitry and implemented the Arduino and Raspberry Pi software to run them. I also assisted in tuning and debugging a third piece (photo below) that manipulated a pair of running turntables based on real-time changes to a couple of Wikipedia articles.
At the end of 2012, I started pursuing drawing and painting as a hobby. Art has always been an interest of mine but, until then, I had only been a spectator. I have been inspired by artists of the Renaissance and Romantic eras as well as contemporary concept artists, illustrators, and pixel artists. As such, I have experimented with a broad range of mediums including pencils, charcoal, pen and ink, oils, markers, and digital media. While I am still learning, my aim is to produce artworks that capture light, emotion, and story with enough realism to be recognizable while retaining broad, expressive brush strokes. Below is a gallery, in no particular order, of a selection of the paintings and drawings I have done over the past year.
Another hobby of mine since 2015 is Electronic Dance Music (EDM) production. Below is a SoundCloud playlist of all the music I have released thus far: