Anish Krishnan

Carnegie Mellon University

Computer Science and Information Systems
agkrishn@alumni.cmu.edu


Projects


Echo: PennApps Winner (2nd Overall and 5 other awards) 02/19

Echo is an intelligent, environment-aware smart cane that acts as assistive tech for the visually or mentally impaired.

Using cameras, microphones, and state-of-the-art facial recognition, natural language processing, and speech-to-text software, Echo is able to recognize both familiar and new faces, allowing users to confidently meet new people and learn more about the world around them. Echo also has a button that, when pressed, will contact the authorities; this way, if the owner is in danger, help is one tap away.
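The face-recognition step can be sketched as a nearest-neighbor lookup over face embeddings: a new face is matched against a gallery of known people, and anything too far from every known embedding is treated as unfamiliar. This is a minimal illustration, not Echo's actual implementation; the gallery, the 3-D vectors (real embeddings are typically 128-D), and the threshold are all made up for brevity.

```python
import math

# Hypothetical gallery of known-face embeddings. Real systems store
# high-dimensional vectors produced by a face-recognition model;
# 3-D vectors are used here only to keep the example short.
known_faces = {
    "Alice": [0.1, 0.9, 0.2],
    "Bob":   [0.8, 0.1, 0.5],
}

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, threshold=0.6):
    """Return the closest known name, or None for an unfamiliar face."""
    name, dist = min(
        ((n, euclidean(embedding, e)) for n, e in known_faces.items()),
        key=lambda t: t[1],
    )
    return name if dist < threshold else None

# A vector near Alice's embedding matches; a distant one does not.
assert identify([0.12, 0.88, 0.22]) == "Alice"
assert identify([0.0, 0.0, 1.0]) is None
```

In practice the threshold trades off false matches against missed matches, and a spoken prompt (via the text-to-speech side of the pipeline) would announce the matched name or ask to enroll a new one.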

DevPost Link

Syne: HackCMU Winner 09/18

Syne is a TensorFlow-based sign-language recognition system that allows people who cannot speak to communicate efficiently with the world around them.

This system gathers the (x, y, z) coordinates of 15 hand joints using a Leap Motion sensor, then maps them to 3D vectors relative to the position of the palm. These vectors are normalized so that the size of the hand doesn't affect the accuracy of the system. Our neural network takes the 45 resulting data points and, using three dense hidden layers, classifies the hand gesture as one of the 26 letters. The model was trained on thousands of readings per letter and achieved a validation accuracy of over 99%.
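The preprocessing described above can be sketched as follows: each joint is translated into the palm's frame of reference, then scaled so that two differently sized hands making the same gesture produce identical feature vectors. The joint positions below are illustrative; in the real system a Leap Motion SDK would supply them.

```python
def normalize_hand(palm, joints):
    """Map absolute (x, y, z) joint positions to scale-invariant,
    palm-relative vectors, as described for Syne's preprocessing."""
    # Translate each joint so the palm is the origin.
    rel = [[j[i] - palm[i] for i in range(3)] for j in joints]
    # Use the farthest joint-to-palm distance as the hand's scale.
    scale = max(sum(c * c for c in v) ** 0.5 for v in rel) or 1.0
    return [[c / scale for c in v] for v in rel]

# Two hands of different sizes making the same gesture normalize
# to the same vectors, so hand size doesn't affect classification.
small = normalize_hand([0, 0, 0], [[1, 0, 0], [0, 2, 0]])
large = normalize_hand([0, 0, 0], [[2, 0, 0], [0, 4, 0]])
assert small == large

# With all 15 joints, flattening gives 15 x 3 = 45 network inputs.
```

The 45 normalized values then feed the three-dense-layer classifier, whose 26-way softmax output picks a letter.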

DevPost Link

Syft: CMU TartanHacks Winner 02/18

Syft lets you practice speeches on virtual stages (using VR) so that you can overcome stage fright. It even uses Google's Natural Language Processing (NLP) API to analyze your speech and give you feedback!

Using Microsoft Azure and the Bing Speech-to-Text API alongside Google's NLP API, Syft aggregates linguistic data from the user's speech to produce a more accurate evaluation of the speaker. Syft takes advantage of the power of VR to place the user on a stage in front of an active crowd, so the speaker can practice in a more realistic environment. Syft analyzes your speech in real time, looking specifically for stutters, rhythm, projection, and vigor, and even applies sentiment analysis.
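A couple of the real-time metrics mentioned above can be sketched from a transcript alone: stutters as immediately repeated words, and pacing as words per minute. This is a toy illustration of the idea, not Syft's actual analysis; in the real app these signals come from aggregating Bing Speech-to-Text output with Google NLP results.

```python
def speech_metrics(transcript, duration_sec):
    """Toy speech feedback: count stutters (an immediately repeated
    word) and compute pacing in words per minute."""
    words = transcript.lower().split()
    stutters = sum(1 for a, b in zip(words, words[1:]) if a == b)
    wpm = len(words) / (duration_sec / 60)
    return {"stutters": stutters, "words_per_minute": round(wpm, 1)}

m = speech_metrics("so so today I I want to talk about syft", 4.0)
print(m)  # {'stutters': 2, 'words_per_minute': 150.0}
```

Sentiment analysis would layer on top of this, scoring the transcript's tone so the aggregate evaluation reflects both delivery and content.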

DevPost Link

Air DJ: Innovative Music Creation Platform 11/17

Air DJ is an intuitive new way to mix your music with your hands, without the use of a keyboard or mouse. The app uses the Leap Motion sensor.

Air DJ allows the user to play tracks and layer various beats onto a song, all controlled with hand gestures.
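The gesture-to-action control described above can be sketched as a simple dispatch table from recognized gestures to playback commands. The gesture names and actions below are assumptions for illustration, not Air DJ's actual gesture set.

```python
# Illustrative mapping from recognized hand gestures to DJ actions.
GESTURE_ACTIONS = {
    "swipe_right": "next_track",
    "swipe_left": "previous_track",
    "fist": "pause",
    "open_palm": "play",
    "tap": "add_beat",
}

def handle_gesture(gesture):
    """Dispatch a Leap-Motion-style gesture to a playback action,
    ignoring anything unrecognized."""
    return GESTURE_ACTIONS.get(gesture, "ignore")

assert handle_gesture("fist") == "pause"
assert handle_gesture("wave") == "ignore"
```

In a real Leap Motion app, the sensor's frame events would feed a gesture recognizer whose output drives this kind of dispatch.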
