I'm a graduate student at Carnegie Mellon University (CMU) pursuing an M.S. degree in Robotic Systems Development. I also earned my B.S. from Carnegie Mellon, majoring in Electrical & Computer Engineering and Robotics. My primary interest is the application of machine learning, artificial intelligence and planning techniques to robotic systems. I'm fluent in Python, C, C# and MATLAB, and have experience in Windows application development, web development and mobile robot systems development from my internships at Microsoft and Comcast. I'm an international student who grew up in Singapore, so my first language is English, but I also speak Mandarin, Hindi and Tamil to varying degrees of fluency. Growing up among such a diverse population exposed me to a multitude of cultures and ideas, and I continue to work well with people from vastly different backgrounds.
I built a prototype of an automated robotic platform that streamlines workflows for a 3D spatial understanding system, including multi-camera calibration and 3D object localization in large indoor spaces. My automated solution significantly reduces the cost, setup time and human labor these tasks require while increasing the robustness and versatility of the system. The engineering process involved both mechanical design and software development in Python.
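As a rough illustration of the 3D object localization step, here is a minimal sketch of linear (DLT) triangulation: given two calibrated cameras observing the same point, the point's 3D position can be recovered by least squares. The function name and matrices are illustrative, not the actual implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel observations of the same point in each view.
    Returns the 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X, derived from x × (P X) = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: the right singular vector
    # associated with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With perfect (noise-free) observations this recovers the point exactly; with real detections it gives the least-squares estimate.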
I worked on a feature that adds virtual reality (VR) functionality to the Remix3D.com catalog, using technologies such as WebVR and BabylonJS. Currently, the catalog lets users view 3D content only through a 2D interface (i.e. a screen); my feature enables users to experience 3D models through VR headsets, exactly as the content creators intended. I expect the improved browsing experience to drive up content publishing and increase the satisfaction of content consumers.
I built a C# Windows application that monitors signals from the Windows Energy Estimation Engine (E3) in real time and displays them to the user with modern UI visuals. Additionally, the tool generates an Event Trace Log (ETL) file that can later be viewed in existing tools like Windows Performance Analyzer (WPA). With battery life becoming an increasingly critical factor in a product's success, it is essential for software engineers to understand the impact of their code on power consumption. This tool lets developers easily see how their code affects power consumption while it is running.
I created a website for Comcast's TDS team that streamlined the team's operations. Using the MEAN stack (MongoDB, Express.js, Angular.js, Node.js), I built a system to record information about every ETL job and DevOps failure the team encountered, reducing the time new employees need to get started on unfamiliar jobs. I also created a dashboard that monitored the health and performance of the team's eight servers. This involved writing Bash scripts and executing them remotely on each server, and it allows the team to track performance trends over time and optimize their load-balancing algorithms.
I am developing a heterogeneous autonomous beach cleanup system called Sureclean with 3 of my peers. The system consists of an unmanned aerial vehicle (UAV), an unmanned ground vehicle (UGV) and a central server. Sureclean is designed to be not only cheaper than existing beach cleanup systems but also more environmentally friendly than the large tractors used today. I wrote a computer vision pipeline in Python that detects litter in aerial images. I also helped test our localization and path planning subsystems, which were integrated with Robot Operating System (ROS) for multi-agent communication. Beyond engineering development, I played a key role in project management, which included defining a use case, eliciting user requirements and designing the system architecture. I also co-wrote and presented the Conceptual Design Report, Preliminary Design Review and Critical Design Report, all of which included detailed budget, schedule and risk planning. The project is scheduled to be completed in December 2019.
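To give a flavor of the litter detection idea, here is a simplified sketch: pixels whose color differs strongly from the sand background are flagged, and the flagged region's centroid gives a candidate litter location. The function name, reference sand color and threshold are illustrative assumptions, not the actual pipeline.

```python
import numpy as np

def detect_litter(rgb, sand_rgb=(194, 178, 128), dist_thresh=60):
    """Flag pixels whose color differs strongly from the sand background,
    then return the centroid (row, col) of the flagged region.

    rgb: HxWx3 uint8 aerial image. Returns None if nothing is flagged.
    """
    diff = rgb.astype(float) - np.array(sand_rgb, dtype=float)
    dist = np.linalg.norm(diff, axis=2)   # per-pixel color distance to sand
    mask = dist > dist_thresh             # "not sand" pixels
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()
```

A real pipeline would add connected-component analysis so that multiple pieces of litter yield separate detections rather than one averaged centroid.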
I implemented a 3-class weather classification pipeline in MATLAB. The system classified single dashcam images into one of three weather classes: Sunny, Rainy or Foggy. The methods built on prior literature in the field and used both color- and frequency-based features. These features were fed into a linear decision boundary and a Support Vector Machine (SVM) as part of a multi-step classification pipeline; the final system achieved an overall accuracy of over 90%. Many computer vision algorithms assume friendly atmospheric conditions and may fail in adverse conditions such as heavy rain or fog. For applications such as autonomous vehicles, the consequences can be fatal, so an accurate weather classification pipeline can help such systems account for these conditions, e.g. by switching the models they use.
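As a hedged sketch of what color- and frequency-based features can look like (in Python rather than the original MATLAB, with illustrative feature choices rather than the actual ones): mean brightness separates bright sunny scenes from dark rainy ones, while the fraction of spectral energy at high frequencies drops for foggy images, since fog acts like a low-pass filter.

```python
import numpy as np

def weather_features(gray):
    """Two illustrative features from a grayscale float image in [0, 1].

    - mean brightness: sunny scenes tend to be brighter than rainy ones
    - high-frequency energy ratio: fog blurs fine detail, so foggy images
      concentrate their spectral energy at low frequencies
    """
    brightness = gray.mean()
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx)        # distance from DC component
    high = spectrum[radius > min(h, w) / 8].sum()
    hf_ratio = high / spectrum.sum()
    return brightness, hf_ratio
```

Features like these would then be passed to the linear decision boundary and SVM stages of the pipeline.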
On a team of 4, I programmed a robot arm in Python to play a game of chess. The robot, which had 5 revolute joints and 2 prismatic end-effector joints, took chess move commands from the command line and executed them on a physical chessboard. We had to implement every aspect of this task, including forward kinematics, inverse kinematics and motion planning. Specific challenges included planning straight-line trajectories in the task space to avoid collisions and compensating for gravity and backlash in the motors.
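For readers unfamiliar with forward kinematics, here is a minimal sketch for a planar chain of revolute joints (the actual arm was a 5-DOF spatial chain, so this 2D version is illustrative only): joint angles accumulate down the chain, and each link contributes its length along the accumulated direction.

```python
import numpy as np

def forward_kinematics(thetas, link_lengths):
    """End-effector (x, y) of a planar chain of revolute joints.

    Each joint i rotates by thetas[i] relative to the previous link;
    each link i has length link_lengths[i].
    """
    x = y = 0.0
    angle = 0.0
    for theta, length in zip(thetas, link_lengths):
        angle += theta                  # angles accumulate down the chain
        x += length * np.cos(angle)
        y += length * np.sin(angle)
    return x, y
```

For example, a two-link arm with both joints at zero stretches straight out along x, while rotating the first joint by 90° and the second by -90° produces an L-shape.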
I built an autonomous robotic ball boy in a group of 3. Our robot was designed to move autonomously on a tennis court and retrieve a tennis ball once a point had ended. A global camera tracked all the objects of interest on the court and served as our localization mechanism, and our mobile robot had an intake mechanism that could easily take in and release a tennis ball. I wrote the computer vision algorithm in C++, which took a video feed from a USB webcam in real time and identified the locations of the tennis ball and the robot in the scene. I also learned how to plan a large project from start to finish, which included creating a schedule, a budget and detailed documentation of our system architecture and design decisions.
On a team of 3, I created a pedestrian detection system that performed real-time analytics. The video scene we used showed people walking through a shopping area (the Town Centre dataset). Using the output of our pedestrian detection algorithm, we generated data highlighting which stores received more foot traffic over the course of the video. Such data could, for instance, be used by shopping mall managers to dynamically set rent for specific store plots depending on the expected volume of traffic. I implemented the computer vision feature extraction pipeline in MATLAB and C. We used Histogram of Oriented Gradients (HOG) features, which have proven effective for object detection. The HOG features output by my code were passed into a linear SVM, which classified each window as person or non-person. Our final project involved implementing this system on a Field Programmable Gate Array (FPGA), which let us speed up the algorithm through hardware optimizations.
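To show the core of the HOG idea, here is a minimal Python/numpy sketch (the original was MATLAB and C, and this omits the block-normalization step of the full descriptor): the image is divided into cells, and each cell accumulates gradient magnitudes into orientation histogram bins.

```python
import numpy as np

def hog_features(gray, cell=8, bins=9):
    """Minimal Histogram of Oriented Gradients for a grayscale float image.

    For each cell x cell patch, accumulate gradient magnitudes into
    orientation bins over [0, 180) degrees (unsigned gradients).
    """
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)

    h, w = gray.shape
    cells_y, cells_x = h // cell, w // cell
    feats = np.zeros((cells_y, cells_x, bins))
    for i in range(cells_y):
        for j in range(cells_x):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            for b in range(bins):
                feats[i, j, b] = mag[sl][bin_idx[sl] == b].sum()
    return feats.ravel()
```

The flattened histogram vector is what gets fed to the linear SVM; an image dominated by horizontal gradients (e.g. a left-to-right intensity ramp) puts nearly all its energy in the 0° bin.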
I was a developer on the Student Dormitory Council (SDC) Booth Committee's Game Design team in 2016, 2017 and 2018. In 2016, we created a Battleship game in Python using Tkinter, meant to be played by visitors to our booth during CMU's Spring Carnival. For various reasons, we were not able to fully complete the game, but I learned a lot from the experience and it has made me a better software developer and team leader. In 2017, I led the Interactions Committee, which was responsible for creating an immersive user experience through technology. This role taught me how to plan a diverse, large-scale technical project over the course of 6 months, culminating in an 18ft x 8ft wooden booth. Our theme was Mars Colony, so our entire booth was designed as a museum on Mars set 500 years in the future. Through creative use of lighting, sound, visuals, holograms and space, we created an engaging, thought-provoking experience for our visitors. Finally, in 2018, I continued to lead the committee and we built 3 very well-received carnival-style games. Our robust, functional and extremely engaging games attracted over 1000 visitors to our booth over the course of the weekend. We used impact sensors, Arduino UNOs, DMX lights and motors to create the games, as you can see below.
On a team of 3, I wrote a system that performed question generation and answering over a Wikipedia article. The system used natural language processing (NLP) techniques to parse the raw text of the article and generate sensible questions, broken down into categories such as True/False, When, Where, Who and What. The system could also answer questions it received based on the information in the article. I wrote the pipeline that ingested the article's raw content and generated the questions. My implementation combined a variety of NLP techniques, including regexes, language models, smoothing, constituency parsers, dependency parsers and named-entity recognition, using the Python framework spaCy as well as Stanford CoreNLP's Python API.
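To illustrate the flavor of template-based question generation in the simplest possible terms, here is a toy regex-only sketch. It is deliberately naive: the real pipeline relied on parsers and named-entity recognition rather than hand-written patterns, and the heuristics below (e.g. "single capitalized word means a person") are illustrative assumptions only.

```python
import re

def generate_questions(text):
    """Turn simple "X is/was Y." sentences into "Who/What ..." questions.

    A toy illustration of template-based question generation:
    pattern-match a copula sentence, then replace the subject with a
    wh-word (a single capitalized subject is crudely guessed as a person).
    """
    questions = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        m = re.match(r"(.+?)\s+(is|was)\s+(.+?)[.!?]?$", sentence)
        if not m:
            continue
        subject, verb, rest = m.groups()
        wh = "Who" if subject[:1].isupper() and " " not in subject else "What"
        questions.append(f"{wh} {verb} {rest}?")
    return questions
```

A dependency parse would replace the brittle regex, and NER would replace the capitalization heuristic, but the template-substitution structure is the same.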
I programmed a robot arm in MATLAB to stack a Jenga tower. The robot, which had 5 revolute joints, picked up Jenga pieces one by one and placed them on a flat surface, gradually building a tower. My robot was consistently able to build a tower 6 stories high in under 5 minutes, with no piece more than 1 cm out of place. Performing this task required a combination of forward kinematics, inverse kinematics, motion planning and waypoint tuning.
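For the inverse kinematics side, here is a minimal sketch of the classic analytic solution for a planar two-link arm (in Python rather than the original MATLAB, and far simpler than the 5-joint arm, so it is illustrative only): the elbow angle follows from the law of cosines, and the shoulder angle from the target direction.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Elbow-down analytic IK for a planar two-link arm.

    Returns joint angles (theta1, theta2) that place the end effector
    at (x, y), or None if the target is out of reach.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None              # target outside the workspace
    theta2 = math.acos(c2)       # elbow-down branch
    # Shoulder angle: target direction minus the offset the elbow induces.
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2
```

Checking the result by running forward kinematics on the returned angles is a standard sanity test; for higher-DOF arms like the Jenga robot's, numerical IK (e.g. Jacobian-based iteration) replaces this closed form.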
I was nominated and selected for a position on CMU's Student Board. Members of the Board serve on the Academic Review Board and University Disciplinary Committee. Working closely with faculty and staff, we deliberate on cases involving violations of academic integrity and/or community standards, policies and procedures, and it is my job to carefully consider each case and provide my perspective as a student. From this experience, I have learned how to arrive at a balanced judgment that incorporates all the viewpoints and information presented, including those of my peers.
I am a teaching assistant for the Introduction to Robotics class in CMU's Robotics Institute. My responsibilities include designing and running two labs each year for a class of 60, with tasks such as line following, dead reckoning, ladder climbing and path planning. Apart from labs, I grade exams and hold regular office hours, where I assist students with homework that frequently involves programming in C, Python and MATLAB. The topics I am expected to be proficient in include computer vision, kinematics and dynamics, controls, path planning and localization.
I was the leader of the technology/interactions subcommittee for SDC Booth for two years. In 2016/2017, I oversaw a team of 8-10 people, and we developed all the lighting and sound for a "museum experience". Since this was my first year leading a large technical project, I erred on the side of delegation, letting my team members work on their tasks and use their judgment when making key decisions. While the project turned out quite well, this approach resulted in a few minor glitches in our final system: because people worked relatively independently of each other, the lack of communication meant that the timing of the lighting and sound was slightly off, which became noticeable after our audio loop had run for a few hours. In 2017/2018, I stayed on as the leader of the group and changed my approach to avoid such issues. This time around, I participated much more in the design process, and even took ownership of the implementation of the game logic and prototype. I also took the time to get to know my team members' strengths and weaknesses, which was incredibly helpful when assigning roles to individuals. In the end, our final system was a huge improvement over what we produced the previous year. All our systems (lighting, sound, mechanics) were cohesive and well integrated, producing an experience our visitors thoroughly enjoyed. This was a great learning experience and has made me a better team leader.
I was the Chair of the Housing & Student Life (HSL) Committee, SDC's largest committee, which acts as a liaison between CMU's Housing Services and Student Life Office. I managed and delegated tasks to 7 committee members, most of whom were new to the organization. By facilitating leadership opportunities and encouraging my committee members to take initiative, I was able to quickly energize the committee, optimize our old operations and kick-start new ones. As a result of this experience, 4 of my 7 committee members were promoted to higher leadership positions in SDC.