Research
Academic Publications
TEEMIL: Towards Educational MCQ Difficulty Estimation in Indic Languages
The 31st International Conference on Computational Linguistics (COLING 2025)
Ravikiran, M., Vohra, S., Verma, R., Saluja, R., & Bhavsar, A. (2025). TEEMIL: Towards Educational MCQ Difficulty Estimation in Indic Languages. The 31st International Conference on Computational Linguistics (COLING 2025). ACL Anthology.
You Reap What You Sow—Revisiting Intra-class Variations and Seed Selection in Temporal Ensembling for Image Classification
International Conference on Frontiers in Computing and Systems (COMSYS 2023)
Ravikiran, M., Vohra, S., Nonaka, Y., Kumar, S., Sen, S., Mariyasagayam, N., & Banerjee, K. (2023). You Reap What You Sow—Revisiting Intra-class Variations and Seed Selection in Temporal Ensembling for Image Classification. In Proceedings of International Conference on Frontiers in Computing and Systems (pp. 73-82). Springer, Singapore.
(Manikandan Ravikiran and Siddharth Vohra contributed equally; names are ordered alphabetically.)
Investigating the Effect of Intraclass Variability in Temporal Ensembling
arXiv preprint (2020)
Vohra, S., & Ravikiran, M. (2020, August 21). Investigating the Effect of Intraclass Variability in Temporal Ensembling. arXiv preprint.
Technical Blogs
Using WAF with App Runner in Copilot
February 23, 2023
Vohra, S. (2023, February 23). Using WAF with App Runner in Copilot. AWS Copilot CLI Blog.
Research Experience
Independent Research
December 2023 - Present
- Probing the multilingual and reasoning capabilities of Large Language Models (LLMs) through independent research
- Collaborating with former Hitachi R&D colleagues on research projects focused on multilingual education and LLM evaluation
- Co-authored a project accepted to COLING 2025 on enhancing LLM reasoning over multiple-choice questions drawn from educational texts in Indian regional languages (Hindi & Kannada)
- Established baselines for MCQ difficulty estimation in these languages using a range of natural language processing techniques
- Setting up and evaluating experiments using various LLMs including GPT, Claude, Llama, and Gemma
- Applying parameter-efficient fine-tuning with LoRA, and its quantized variant QLoRA, to improve model performance and training efficiency
- Leading a project to benchmark coding-question difficulty using LLMs, with the goal of developing a BERT-like model for assessing programming-problem complexity
Hitachi Research & Development
June 2020 – August 2020
Artificial Intelligence (AI) / Machine Learning (ML) Research Intern (Pro-Bono)
Bengaluru, India
- Contributed to Computer Vision research projects at the Hitachi R&D center
- Benchmarked a Temporal Ensembling model across diverse datasets, analyzing the impact of intraclass variability on classification performance
- Designed experiments, selected appropriate datasets, interpreted results, and collaborated with senior researchers
- Explored computer vision techniques for pedestrian attribute recognition and person re-identification
- Tested different algorithmic approaches and neural network architectures to improve identification accuracy in security applications
- Technologies used: Python, PyTorch, Google Colaboratory (for cloud GPU-based model training)