Sohom Sen - ECE PhD Student of the Month - February 2026
Sohom Sen is a PhD student from India who discovered his passion for research during a summer internship at NJIT under the guidance and mentorship of Dr. Tao Han and Dr. Durga Misra. That experience clarified that academia and research aligned better with his goals than the corporate path. He was fortunate to receive an offer to pursue his PhD under Dr. Han and is now based in the UNICS Lab, FMH 414.
His research focuses on Real-Time Edge-Powered Intelligent Systems leveraging AI—addressing the fundamental challenge of adapting state-of-the-art AI models for edge deployment through knowledge distillation, model compression, and architectural innovations. Rather than simply deploying existing models, he investigates how to intelligently compress and optimize AI systems to operate on resource-constrained hardware while maintaining performance—enabling real-world applications without cloud dependency.
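To give a flavor of the knowledge distillation his work relies on: a small "student" model is trained to match the softened predictions of a larger "teacher," letting the compact model inherit much of the teacher's behavior at a fraction of the cost. The sketch below is a minimal, framework-free illustration of the standard temperature-scaled distillation loss (Hinton et al.'s formulation), not code from his projects; all names are illustrative.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T yields a softer distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student predictions.

    Scaled by T^2 so gradient magnitudes stay comparable as T varies,
    following the usual distillation convention.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    return (temperature ** 2) * kl.mean()

# Toy batch: two examples, three classes; the student roughly tracks the teacher
teacher = np.array([[2.0, 1.0, 0.1], [0.2, 3.0, 0.5]])
student = np.array([[1.5, 0.8, 0.2], [0.1, 2.5, 0.7]])
loss = distillation_loss(student, teacher)
```

In practice this loss is combined with the ordinary cross-entropy on ground-truth labels, and the student's architecture is chosen to fit the target edge device's memory and latency budget.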
His most recent work, CoachAI, exemplifies this direction. It's a real-time intelligent coaching system for tennis that uses computer vision and multimodal AI to analyze player performance and provide actionable feedback. His team has secured a provisional patent and is advancing toward a non-provisional filing. Beyond tennis, he's exploring how these techniques generalize to other domains, combining his interests in Computer Vision, Graph Neural Networks, and Multi-Agent Systems to create practical, deployable intelligent systems.
What would you say could be the next big thing in your area of research?
The next frontier, in my view, is adaptive and context-aware edge inference—where models dynamically adjust their computational complexity based on real-time device constraints, input complexity, and application requirements. Right now, we deploy fixed, optimized models. But the real challenge is building systems that learn to balance accuracy, latency, and energy consumption on-the-fly.
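One simple form of this on-the-fly accuracy/latency trade-off is early-exit inference: easy inputs leave the network at a shallow exit, while harder inputs pass through more stages. The sketch below is a hypothetical, framework-free illustration of that idea, not a description of any system mentioned here; the stage callables and threshold are assumptions for the example.

```python
import numpy as np

def adaptive_infer(x, stages, threshold=0.9):
    """Run successive model stages, exiting as soon as one is confident.

    `stages` is a list of callables, each mapping the input to class
    probabilities; deeper stages cost more compute. Returns the predicted
    class and the depth (number of stages actually executed).
    """
    for depth, stage in enumerate(stages, start=1):
        probs = stage(x)
        # Exit early when confident, or fall through at the final stage
        if probs.max() >= threshold or depth == len(stages):
            return int(probs.argmax()), depth

# Toy stages standing in for shallow and deep classifier heads
stages = [
    lambda x: np.array([0.55, 0.45]),  # shallow head: uncertain
    lambda x: np.array([0.95, 0.05]),  # deep head: confident
]
pred, depth = adaptive_infer(None, stages)
```

Lowering the threshold shifts the operating point toward lower latency and energy at some cost in accuracy, which is exactly the balance an adaptive edge system would tune against real-time device constraints.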
Connected to this is multi-modal reasoning at the edge. We've made progress in single-modality systems like CoachAI, but real-world applications demand reasoning across vision, audio, and sensor data simultaneously—all without leaving the device. The compression and orchestration challenges here are significant.
What excites me is the intersection: designing edge systems that are not just optimized, but intelligent about their own resource allocation. This requires advancing knowledge distillation techniques, exploring dynamic neural architectures, and potentially leveraging graph neural networks to model device-to-model compatibility in heterogeneous edge environments. These aren't solved problems, and I'm actively exploring how my work in model compression and real-time systems can contribute to these directions.
You have taken three courses in Data Science. Please share what you have learned from those courses and how useful they are to your research.
The three Data Science courses were instrumental in shaping how I approach problem-solving. Rather than confining myself to edge AI or computer vision, I've learned to actively search across domains for existing solutions to related problems—then adapt and integrate those ideas into my own work.
These courses broadened my search space significantly. They exposed me to methodologies, datasets, and techniques from different fields that aren't immediately obvious within my primary domain. More importantly, they helped me build genuine relationships with professors across different research areas. These connections have been invaluable—I've had the opportunity to collaborate on their projects alongside my PhD work, which provides perspective shifts, skill development outside my core research, and has resulted in additional research publications that strengthen my academic portfolio.
This cross-domain thinking has directly influenced my research. When designing CoachAI, for instance, insights from other domains informed how we approached knowledge distillation and real-time optimization. Taking these collaborative breaks from my primary research keeps me grounded, prevents tunnel vision, and continuously expands my technical toolkit.
You have a good record of co-authoring papers with your labmates and advisors. Please share some experience of working on collaborative projects and publishing as a team.
I firmly believe research is inherently collaborative—whether with your advisor or labmates. Each person brings distinct expertise, and working together exposes your own weaknesses while strengthening the overall foundation of the work. You learn faster and produce better research.
Over the past three semesters, my relationships with labmates have evolved beyond the typical professional dynamic. They've become genuine friends, which fundamentally changes how we work together. There's a comfort level that enables honest, unfiltered brainstorming. We can challenge each other's ideas without hesitation, iterate quickly, and collectively arrive at better solutions. This trust directly translates to higher-quality research and genuine productivity gains.
When you're not navigating interpersonal friction or professional distance, you can focus entirely on the intellectual problem at hand. The collaborative environment we've built in the lab—one rooted in friendship and mutual respect—removes barriers to creativity and honest feedback. That's where the best ideas emerge, and that's reflected in the papers we publish together.