I am interested in communication and the social world. Our remarkable communicative capabilities underlie the construction of social relationships and organizations, and shape who we are individually and as a species. My research program combines computational modeling and behavioral experimentation to advance our understanding of communication in social context—from the fine details of language structure to the layered meanings in social interaction and the central role of texts in human institutions and law. My work draws on and contributes to a wide range of fields within cognitive science, broadly construed, including social and cognitive psychology, economics, linguistics, computer science, and philosophy.
Preprints
(* indicates joint authorship)
- Qian, P., & Ullman, T. Shape guides visual pretense. PsyArXiv. [pdf] [osf]
- Bridgers*, S., Qian*, P., Parece, K., Taliaferro, M., Schulz, L., & Ullman, T. Loopholes: A window into value alignment and the communication of meaning. PsyArXiv. [pdf] [osf]
- Qian, P., & Levy, R. Comprehenders’ error correction mechanisms are finely calibrated to language production statistics. PsyArXiv. [pdf] [osf]
- Parece, K., Bridgers, S., Qian, P., Schulz, L., & Ullman, T. Skirting the sacred: Moral contexts increase the cost of intentional misunderstandings. PsyArXiv. [pdf] [osf]
- Xu, N., Zhang, Q., Zhang, M., Qian*, P., & Huang*, X. On the tip of the tongue: Analyzing conceptual representation in large language models with reverse-dictionary probe. ArXiv. [pdf] [code]
Publications
- Qian, P., Bridgers, S., Taliaferro, M., Parece, K., & Ullman, T. (in press). Ambivalence by design: A computational account of loopholes. Cognition. [pdf] [osf]
- Murthy, S., Parece, K., Bridgers, S., Qian, P., & Ullman, T. (2023). Comparing the evaluation and production of loophole behavior in humans and large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 4010-4025). [pdf]
- Wilcox, E. G., Gauthier, J., Hu, J., Qian, P., & Levy, R. (2022). Learning syntactic structures from string input. In Algebraic Structures in Natural Language (pp. 113-138). CRC Press.
- Qian, P., & Levy, R. (2022). Flexible generation from fragmentary linguistic input. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 8176-8196). [pdf] [code]
- Tucker, M., Eisape, T., Qian, P., Levy, R., & Shah, J. (2022). When does syntax mediate neural language model performance? Evidence from dropout probes. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 5393-5408). [pdf] [code]
- Wang, Y., Hu, J., Levy, R., & Qian, P. (2021). Controlled evaluation of grammatical knowledge in Mandarin Chinese language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 5604-5620). [pdf] [code]
- Qian, P., Naseem, T., Levy, R., & Astudillo, R. F. (2021). Structural guidance for Transformer language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (pp. 3735-3745). [pdf] [code]
- Tucker, M., Qian, P., & Levy, R. (2021). What if this modified that? Syntactic intervention via counterfactual embeddings. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 862-875). [pdf] [code]
- Wilcox, E., Qian, P., Futrell, R., Kohita, R., Levy, R., & Ballesteros, M. (2020). Structural supervision improves few-shot learning and syntactic generalization in neural language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 4640-4652). [pdf] [code]
- Wilcox, E. G., Gauthier, J., Hu, J., Qian, P., & Levy, R. P. (2020). On the predictive power of neural language models for human real-time comprehension behavior. In Proceedings of the 42nd Annual Meeting of the Cognitive Science Society (CogSci). [pdf] [code]
- Gauthier, J., Hu, J., Wilcox, E., Qian, P., & Levy, R. (2020). SyntaxGym: An online platform for targeted evaluation of language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations (pp. 70-76). [pdf] [code]
- Hu, J., Gauthier, J., Qian, P., Wilcox, E., & Levy, R. (2020). A systematic assessment of syntactic generalization in neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 1725-1744). [pdf] [code]
- Mollica, F., Siegelman, M., Diachek, E., Piantadosi, S. T., Mineroff, Z., Futrell, R., Kean, H., Qian, P., & Fedorenko, E. (2020). Composition is the core driver of the language-selective network. Neurobiology of Language, 1(1), 104-134. [pdf] [code]
- An, A., Qian, P., Wilcox, E., & Levy, R. (2019). Representation of constituents in neural language models: Coordination phrase as a case study. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 2888-2899). [pdf] [code]
- Qian, P., Hewitt, L., Tenenbaum, J. B., & Levy, R. (2019). Inferring structured visual concepts from minimal data. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 41). [pdf]
- Futrell, R., Qian, P., Gibson, E., Fedorenko, E., & Blank, I. (2019). Syntactic dependencies correspond to word pairs with high mutual information. In Proceedings of the Fifth International Conference on Dependency Linguistics (DepLing, SyntaxFest 2019) (pp. 3-13). [pdf] [code]
- Wilcox, E., Qian, P., Futrell, R., Ballesteros, M., & Levy, R. (2019). Structural supervision improves learning of non-local grammatical dependencies. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT) (pp. 3302-3312). [pdf] [code]
- Futrell, R., Wilcox, E., Morita, T., Qian, P., Ballesteros, M., & Levy, R. (2019). Neural language models as psycholinguistic subjects: Representations of syntactic state. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (pp. 32-42). [pdf] [code]
- Qian, P., Qiu, X., & Huang, X. (2016). Analyzing linguistic knowledge in sequential model of sentence. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (pp. 826-835). [pdf]
- Qian, P., Qiu, X., & Huang, X. (2016). Investigating language universal and specific properties in word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 1478-1488). [pdf]
- Qian, P., Qiu, X., & Huang, X. (2016). A new psychometric-inspired evaluation metric for Chinese word segmentation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 2185-2194). [pdf]
- Qian, P., Qiu, X., & Huang, X. (2016). Bridging LSTM architecture and the neural dynamics during reading. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (pp. 1953-1959). [pdf]