Jules Gagnon-Marchand

Master’s Student (Thesis) at Mila and McGill in Montreal.

Background in mathematics and computer science.

Master’s supervisor: Prof. Jackie Cheung, Mila / McGill, CIFAR AI Chair.

GPA: 4.0 / 4.0

Professional Experience:

      Google Brain with Noam Shazeer – Research Internship, Mountain View office (remote):

-     Large-scale language models, language generation, and retrieval.

      Google AI – NLP branch, Research Internship, Googleplex (Mountain View):

-     Built an end-to-end neural conversational model for Google Assistant – Shopping:

o    Conceived and pitched the project, organized the data collection, and implemented and iterated on the models, obtaining very promising results.

-     Also built a salient-term representation learning system on top of BERT for conversational shopping assistants.

      Huawei AI Research – NLP Team, Research Internship:

-     Worked on improving generative adversarial models for text.

-     Worked on unsupervised translation and adversarial losses in the context of text generation.

      Autodesk Research – Applied Research Internship:

-     Delivered a 3D deep-learning computer-vision solution: semantic segmentation of 3D point-cloud scenes, exploiting the sparsity of 3D space.

Prizes:

   CSGames 2015: first place in machine learning, second place in classical AI, and third place in parallelism (a podium finish in every category entered).

   First place in the Fighting Fake News with AI competition by Element AI; the result was covered in the mainstream press.

Publications:

      “SALSA-TEXT: Self-Attentive Latent Space-Based Adversarial Text Generation”, Canadian Conference on AI 2018, Jules Gagnon-Marchand, Hamed Sadeghi, Md. Akmal Haidar, Mehdi Rezagholizadeh.

      “Monitoring Neuromotricity Online: A Cloud Computing Approach”, 17th Conference of the International Graphonomics Society, 2015, by Olivier Lefebvre, Pau Riba, Jules Gagnon-Marchand, Charles Fournier, Alicia Fornés, Josep Lladós, Réjean Plamondon:

Algorithms were previously developed for extracting the lognormal parameters that describe handwriting movements. Studying the evolution of these parameters over time makes it possible to monitor changes in a user’s neuromotor skills. We explore cloud-based approaches for this purpose.
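
The core idea can be illustrated with a minimal sketch (a hypothetical illustration, not the paper’s actual pipeline): fit lognormal parameters to each recording session’s velocity data, then track how those parameters drift across sessions. The function names, the input format, and the use of scipy’s generic lognormal fit are all assumptions made here for illustration.

```python
import numpy as np
from scipy import stats


def extract_lognormal_params(velocity_samples):
    """Fit a lognormal distribution to one session's velocity samples.

    Stand-in for the dedicated parameter-extraction algorithms the paper
    refers to; scipy's generic fit is used purely to illustrate the idea
    of summarizing a session by (mu, sigma).
    """
    sigma, loc, scale = stats.lognorm.fit(velocity_samples, floc=0)
    return {"mu": float(np.log(scale)), "sigma": float(sigma)}


def monitor_evolution(sessions):
    """Estimate the drift of the fitted parameters across sessions.

    `sessions` is a time-ordered list of 1-D arrays of velocity samples,
    one array per recording session (hypothetical input format).
    """
    params = [extract_lognormal_params(s) for s in sessions]
    idx = np.arange(len(params))
    mus = np.array([p["mu"] for p in params])
    sigmas = np.array([p["sigma"] for p in params])
    # Slope of a simple linear fit; a real monitoring system would use
    # more robust longitudinal statistics.
    return {
        "mu_trend": np.polyfit(idx, mus, 1)[0],
        "sigma_trend": np.polyfit(idx, sigmas, 1)[0],
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic sessions whose spread slowly increases over time.
    sessions = [rng.lognormal(mean=0.0, sigma=0.4 + 0.02 * i, size=500)
                for i in range(10)]
    print(monitor_evolution(sessions))
```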

Talks Given:

      @ Mila: “A Review of Dense Passage Retrieval for Open-Domain NLP” (2021)

      @ Mila: “A Review of Non-MLE-Based Text Generation Approaches” (2021)

      @ Google AI: “Conversational Shopping Assistant” (2019)

      @ Google AI, for the ShopTalk group: “GANs, GAN Stability, Conditional GANs, and CycleGAN” (2019)

      “Latent Space-Based Text Generation Using Attention Models” (2018), chosen to represent Huawei Research Montreal at the Natural Language Processing Workshop for Yoshua Bengio’s Mila Sponsors and Partners: http://www.crm.umontreal.ca/2018/Langue18/progNLP_en.html

      @ Huawei Research: “Spectral Normalization: The State of the Art in GAN Stabilization” (2018)

      @ Autodesk Research: “Deep Learning for Semantic Segmentation of 3D Scenes” (2017)

Related Keywords:

Neural networks, NN, artificial intelligence, AI, natural language processing, NLP, deep learning, DL, machine learning, ML, data structures, question answering, open domain question answering, NLU, natural language understanding, NLG, natural language generation, algorithmic complexity, big-O, computational linguistics, generative adversarial networks, convolutional neural networks, CNN, recurrent neural networks, RNN, long short term memory, LSTM, sequence learning, time series, regression, medical imaging, medical imagery, EEG, neural machine translation, NMT, seq2seq, sequence to sequence, conversational models, research