My name is Muhammad Khalifa. I am a first-year PhD student at the University of Michigan, Ann Arbor, where I am fortunate to be advised by Lu Wang and Honglak Lee. My main research interests are reasoning, representation learning, and few-shot learning. Previously, I did my master’s in the Computer Science Department at Cairo University, where I focused on low-resource, multi-dialectal Arabic NLU. I spent 10 months at Amazon AI working with Miguel Ballesteros and Kathy McKeown on multiple projects, including dialogue summarization and semi-structured document understanding. Prior to that, I was an intern at Naver Labs Europe, where I worked on controllable text generation and energy-based models with Hady Elsahar and Marc Dymetman.

On a side note, I’m an avid reader of psychology and philosophy, and an all-time listener of uplifting trance music. In my free time, I play the piano and write and produce my own music. If you’d like to chat about research or mentoring, or potentially collaborate, ping me at [lastname]

( Twitter / LinkedIn / Scholar / Github / CV )


Nov 30th, 2022: I was awarded the Rackham Fellowship for Outstanding International Students as a PhD precandidate!!

Oct 11th, 2022: New preprint on contrastive training for semi-structured document classification!

July 10th, 2022: Attending NAACL 2022 in Seattle. Reach out if you want to have a chat!

May 24th, 2022: New preprint on unsupervised multi-hop reranking with large language models!

May 3rd, 2022: Won the best poster award at UMich NLP Day for my ICLR 2021 work, competing with over 20 other posters!

August 26th, 2021: Two papers accepted into EMNLP ‘21 and HICSS-55!!

August 7th, 2021: Successfully defended my master’s thesis!!

March 1st, 2021: My internship at Amazon AI was extended. Excited to be working with Yogarshi Vyas and Shuai Wang on zero-shot classification and semi-structured language model pre-training.

Jan 12th, 2021: Our paper on “A Distributional Approach To Controlled Text Generation” was accepted to ICLR 2021 (Top 2.2% of submissions and Oral Presentation). [Paper] [Code] [Blog]

October 12th, 2020: Started an applied scientist internship at Amazon AI, working with Miguel Ballesteros and Kathleen McKeown.

Research Highlights

Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang. 2022. LEPUS: Prompt-based Unsupervised Multi-hop Reranking for Open-domain QA. In submission. [Preprint]

Muhammad Khalifa, Miguel Ballesteros, and Kathleen McKeown. 2021. A Bag of Tricks for Dialogue Summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8014–8022, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. [Paper].

Muhammad Khalifa, Muhammad Abdul-Mageed, and Khaled Shaalan. 2021. Self-Training Pre-Trained Language Models for Zero- and Few-Shot Multi-Dialectal Arabic Sequence Labeling. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 769–782. Association for Computational Linguistics. [Paper] [Code] [Bibtex]

Muhammad Khalifa*, Hady Elsahar*, Marc Dymetman*. “A Distributional Approach to Controlled Text Generation”. In International Conference on Learning Representations 2021. [Paper] [Code] [Blog]

Muhammad Khalifa and Khaled Shaalan. “Character Convolutions for Arabic Named Entity Recognition with Long Short-Term Memory Networks”. In Computer Speech & Language, Volume 58, 2019, Pages 335–346, ISSN 0885-2308. [Paper]