Request for Information (RFI): Evaluation of Neural Text Generation Models and Methods in Explainable NLP
IARPA is seeking information on established techniques, metrics and capabilities related to the evaluation of generated text and the evaluation of human-interpretable explanations for neural language model behavior. This RFI is issued for planning purposes only, and it does not constitute a formal solicitation for proposals or suggest the procurement of any material, data sets, etc. The following sections of this announcement contain details on the specific technology areas of interest, along with instructions for the submission of responses.
Background and Scope:
Neural language models (NLMs) have achieved state-of-the-art performance on a wide variety of natural language tasks. In natural language generation in particular, models such as GPT-3 have produced strikingly human-like text. Methods to evaluate and explain these technologies, however, have not kept pace with the technologies themselves.
Language generation models can be used for a variety of automated tasks involving modification of a pre-existing text, such as paraphrasing, style transfer, summarization, etc. Measuring success on these tasks can be challenging: a modified text must remain faithful to the meaning of the text from which it is derived (i.e., maintaining sense), while also exhibiting human-like fluency (i.e., soundness). Although numerous automated techniques for evaluating sense and soundness have been developed, techniques that require humans to grade generated text (e.g., with Likert scales or ranking) remain the gold standard. 
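To illustrate the shape of the automated side of this evaluation, the following is a minimal sketch of a "sense" score computed as unigram-overlap F1 between a source text and its modified derivative. The function name and the surface-token formulation are illustrative choices only; learned metrics replace surface tokens with model representations but follow the same pattern of scoring a modified text against the text it was derived from.

```python
from collections import Counter

def unigram_f1(source: str, modified: str) -> float:
    """Crude automated 'sense' proxy: unigram-overlap F1 between a source
    text and its machine-modified derivative. Learned metrics substitute
    contextual representations for surface tokens, but share this shape."""
    src = Counter(source.lower().split())
    mod = Counter(modified.lower().split())
    overlap = sum((src & mod).values())  # shared tokens, with multiplicity
    if overlap == 0:
        return 0.0
    precision = overlap / sum(mod.values())
    recall = overlap / sum(src.values())
    return 2 * precision * recall / (precision + recall)
```

Such surface metrics are exactly the kind of established technique this RFI asks respondents to go beyond: they reward lexical copying and cannot distinguish a faithful paraphrase from a fluent distortion.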
Furthermore, as language generation models increasingly produce human-like content on the internet, there is growing interest from diverse stakeholders in capabilities to flag artificially generated text content, in its many varieties. As in other text classification tasks, NLM classifiers have seen success in identifying machine-generated text; however, it is difficult to derive explanations for the predictions of multi-layer neural models, and the human user's inability to understand and trust the rationale underpinning individual model predictions limits a system's potential use cases.
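A toy sketch of the statistical intuition behind detectors such as GLTR [6]: machine-generated text tends to consist of tokens that a language model itself rates as likely. The unigram "model" below is an illustrative stand-in for a full NLM, which would supply contextual token probabilities or ranks.

```python
import math
from collections import Counter

def avg_logprob(text: str, counts: Counter, total: int) -> float:
    """Average per-token log-probability under a (toy) unigram language
    model. GLTR-style detectors compute the analogous statistic under a
    full neural LM: suspiciously high average probability (or consistently
    low token rank) is evidence that text was machine generated."""
    tokens = text.lower().split()
    eps = 1e-9  # probability floor for out-of-vocabulary tokens
    return sum(math.log(counts[t] / total if counts[t] else eps)
               for t in tokens) / len(tokens)
```

Note that a statistic like this yields a score, not an explanation: it says nothing about *which* properties of the text drove the decision, which is the gap this RFI addresses.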
There is a growing body of explainable NLP techniques, but many of the proposed methods for text classifier models involve delineating spans of input text that a model 'attends' to when predicting a label. A shortcoming of span-level explanations is that they do not identify the actual linguistic features or structures (syntactic, morphological, discourse-level, etc.) at play, even though there is mounting evidence that NLMs encode these aspects of human language.
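For concreteness, span-level explanations typically take the following form: given per-token attribution scores (attention weights, gradients, etc.), report the most influential contiguous span. The function and fixed span width below are an illustrative sketch, not a reference to any particular method.

```python
def top_span(tokens: list[str], scores: list[float], width: int = 3) -> list[str]:
    """Return the contiguous `width`-token span with the highest total
    attribution score -- the kind of output most span-level explanation
    methods produce. It names *where* the model looked, not *which*
    linguistic feature (syntactic, morphological, discourse-level)
    actually drove the label."""
    best_start = max(range(len(tokens) - width + 1),
                     key=lambda i: sum(scores[i:i + width]))
    return tokens[best_start:best_start + width]
```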
Evaluating explanations presents an additional challenge, especially for text classification tasks, such as machine-generated text detection, where it is difficult to produce ground-truth annotations. Unlike sentiment classification, where an annotator can generally identify which spans in a sentence express positive or negative sentiment, humans do not have clear intuitions about the kinds of features that differentiate human- from machine-generated text. Explainable NLP datasets have focused on tasks where humans have strong intuitions about the correct explanations for ground-truth labels. This lack of ground-truth datasets poses a challenge for evaluating explanations, though performance on downstream tasks that rely on explanations can serve as a non-ideal proxy for explanation quality.
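One family of such proxies measures faithfulness rather than agreement with (unavailable) human rationales: erase the tokens an explanation flags and observe how much the classifier's confidence drops, in the style of "comprehensiveness" erasure metrics. The interface below is an illustrative sketch, not a reference to any specific toolkit.

```python
from typing import Callable, Iterable

def comprehensiveness(classify: Callable[[list[str]], float],
                      tokens: list[str],
                      rationale: Iterable[int]) -> float:
    """Faithfulness proxy requiring no ground-truth annotations: the drop
    in classifier confidence when the tokens flagged by an explanation
    are erased. A faithful explanation should produce a large drop; a
    spurious one, little or none."""
    flagged = set(rationale)
    kept = [t for i, t in enumerate(tokens) if i not in flagged]
    return classify(tokens) - classify(kept)
```

Such erasure metrics are themselves imperfect (erased inputs fall outside the classifier's training distribution), which is one reason this RFI seeks novel techniques for measuring explanation quality.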
The purposes of this RFI are the following:
• Identification of novel human or automatic techniques, metrics and capabilities for evaluating the sense and soundness of machine modified text
• Identification of novel methods to derive human-interpretable explanations from NLM text classifiers
• Identification of novel techniques for measuring the quality of local explanations derived from NLMs
Responses to this RFI should answer any or all of the following questions:
1. Does your organization currently use language generation technology for tasks involving the modification of text (paraphrasing, style transfer, summarization, etc.)?
2. Has your organization researched or implemented novel techniques to evaluate the sense and soundness of machine-modified text? (Novel techniques may include human or automated methods, but may not include techniques enumerated in existing surveys of text generation evaluation, e.g., [2].) Please describe these capabilities and provide relevant references.
3. Does your organization currently employ novel methods to derive human-interpretable explanations from NLM text classifiers (i.e., methods not described in explainable NLP survey papers such as [4])? Please describe these capabilities and provide relevant references.
• Could these techniques be applied to the problem of detecting automatically generated or machine-modified text? Please explain.
4. Does your organization have techniques or metrics for evaluating the quality of human-interpretable explanations for NLM text classifiers? Please describe these capabilities and provide relevant references.
Preparation Instructions to Respondents:
IARPA requests that submissions briefly and clearly describe the approach or capability, directly address any or all of the specific questions, and outline any known critical technical issues/obstacles. This announcement contains all of the information required to submit a response. No additional forms, kits, or other materials are needed.
IARPA welcomes responses from all capable and qualified sources from within and outside of the U.S.
Because IARPA is interested in an integrated and diverse approach, responses from teams with complementary areas of expertise are encouraged.
Submissions from Federally Funded Research and Development Centers (FFRDCs) and University Affiliated Research Centers (UARCs) are permitted, with the understanding that neither group is able to propose against any IARPA program. Instead, submissions from these groups should address the technical elements described above and reflect on how the respondent could support program efforts as a potential test and evaluation partner, enabling IARPA to validate different potential approaches to the research challenge.
Responses have the following formatting requirements:
1. A one-page cover sheet that identifies the title, organization(s), and the respondent's technical and administrative points of contact (including names, addresses, phone and fax numbers, and email addresses of all co-authors), and that clearly indicates its association with RFI-22-01;
2. A substantive, focused, one-half page executive summary;
3. Answers to the above questions, including potential research approaches capable of achieving a potential program on this topic, limited to 5 pages (minimum 12-point Times New Roman font, appropriate for single-sided, single-spaced 8.5 by 11-inch paper, with 1-inch margins);
4. A list of citations (any significant claims or reports of success must be accompanied by citations);
5. Optionally, a single overview briefing chart graphically depicting the key ideas;
6. An appendix of critical reference papers or white papers (no more than 3) associated with answers or potential approaches;
7. A discussion of any risks of unfair bias or discrimination that could arise through the data itself or through human bias within the workforce, together with proposed mitigations, such as removal of any personal characteristics whose use cannot be objectively justified, with particular care over protected characteristics, to ensure the outputs are free from unfair bias and prejudice, whether conscious or unconscious.
Submission Instructions to Respondents:
Responses to this RFI are due no later than 5 p.m., Eastern Time, on 12/10/2021. All submissions must be electronically submitted to firstname.lastname@example.org as a PDF document. Inquiries to this RFI must be submitted to email@example.com. Do not send questions with proprietary content. No telephone inquiries will be accepted.
Disclaimers and Important Notes:
This is an RFI issued solely for information and planning purposes and does not constitute a solicitation. Respondents are advised that IARPA is under no obligation to acknowledge receipt
of the information received or provide feedback to respondents with respect to any information submitted under this RFI.
Responses to this notice are not offers and cannot be accepted by the Government to form a binding contract. Respondents are solely responsible for all expenses associated with responding to this RFI. IARPA will not provide reimbursement for costs incurred in responding to this RFI. It is the respondent's responsibility to ensure that the submitted material has been approved for public release by the information owner.
The Government does not intend to award a contract on the basis of this RFI or to otherwise pay for the information solicited, nor is the Government obligated to issue a solicitation based on responses received. Neither proprietary nor classified concepts or information should be included in the submittal. However, should a respondent wish to submit classified concepts or information, prior coordination must be made with the IARPA Chief of Security. Email the Primary Point of Contact with a request for coordination with the IARPA Chief of Security.
Input on technical aspects of the responses may be solicited by IARPA from non-Government consultants/experts who are bound by appropriate non-disclosure requirements. Submissions may be reviewed and followed up on by an assigned technical contractor supporting the designated IARPA POC.
Several key laws, enacted over the past three decades, provide general privacy and confidentiality requirements that either directly or indirectly affect all government agencies. These include the Privacy Act of 1974, the Computer Security Act of 1987, the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the USA PATRIOT Act of 2001, and the Confidential Information Protection and Statistical Efficiency Act of 2002. Under federal law, protected characteristics include race, color, national origin, religion, gender (including pregnancy), disability, age (if the employee is at least 40 years old), and citizenship status. Processing any of these data sets could inadvertently introduce discrimination or bias with respect to these protected characteristics.
Federal laws and regulations that mandate protections for the privacy of citizens are applicable to the use of geospatial data. The Office of Management and Budget (OMB) states in “Circular A-16 Revised: Coordination of Geographic Information and Related Spatial Data Activities” that geographic and spatial data must not compromise the privacy and the security of personal data about citizens.
Contracting Office Address:
Office of the Director of National Intelligence
Intelligence Advanced Research Projects Activity
Washington, District of Columbia 20511
Primary Point of Contact:
Dr. Tim Mckinnon
Program Manager – Office of Analysis Research
Intelligence Advanced Research Projects Activity
1. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems (NeurIPS) 33.
2. Celikyilmaz, A., Clark, E., & Gao, J. (2020). Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799.
3. Turek, M. (2018) Explainable Artificial Intelligence (XAI) [online] Available: https://www.darpa.mil/program/explainable-artificial-intelligence.
4. Danilevsky, M., Qian, K., Aharonov, R., Katsis, Y., Kawas, B., & Sen, P. (2020). A survey of the state of explainable AI for natural language processing. arXiv preprint arXiv:2010.00711.
5. Rogers, A., Kovaleva, O., & Rumshisky, A. (2020). A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics, 8, 842-866.
6. Gehrmann, S., Strobelt, H., & Rush, A. M. (2019). GLTR: Statistical detection and visualization of generated text. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.
7. Wiegreffe, S., & Marasović, A. (2021). Teach me to explain: A review of datasets for explainable NLP. arXiv preprint arXiv:2102.12060.