Hi - my name is Peri and I am a PhD student at the Rutgers Vision Lab Research Group, where I am advised by Kristin Dana. My current research investigates methods for scene understanding and segmentation of images and videos that minimize the requirement for human labeling. So far this has been achieved through known object-specific priors, domain adaptation, and data synthesis.
My computer vision journey began at WINLAB in June 2016, where I built a robot able to fetch items in a pre-mapped environment and return to its charging station. During my final year of undergraduate studies, WINLAB also sponsored my capstone design project (4th place winner), which aimed to detect living beings forgotten in hot vehicles and alert the vehicle's owner. I then joined Goldman Sachs for an applied machine learning summer internship, where I developed a large-scale hierarchical document classifier to improve the knowledge management system. From 2018 to 2020 I interned at Siemens Corporate Research, where I had the opportunity to work on feature representation learning in depth data, publishing ViewSynth at BMVC 2020. In the summer of 2020 I was fortunate to work with Samsung Research America on exciting research on frame interpolation and extrapolation, and to collaborate with Cloud to Street on a project on flood segmentation in remote sensing imagery, publishing H2O-Net at WACV 2021. The full list of publications and work experience can be found on my Google Scholar page and in my CV.
I actively engage in corporate collaborations, consulting, and internships. Feel free to reach out if you have questions or would like to inquire about working together or hiring me for a project.