Bennett, R. E. (2006). Technology and writing assessment: Lessons learned from the US National Assessment of Educational Progress. Annual Conference of the International Association for Educational Assessment, Singapore, IAEA. http://www.iaea.info/documents/paper_1162a26d7.pdf.
Bridgeman, B., Trapani, C., & Yigal, A. (2012). Comparison of human and machine scoring of essays: Differences by gender, ethnicity, and country. Applied Measurement in Education, 25(1), 27-40.
Byrne, R., Tang, M., Truduc, J., & Tang, M. (2010). eGrader, a software application that automatically scores student essays: With a postscript on the ethical complexities. Journal of Systemics, Cybernetics & Informatics, 8(6), 30-35.
Chen, C-F. E., & Cheng, W-Y. E. (2008). Beyond the design of automated writing evaluation: Pedagogical practices and perceived learning effectiveness in EFL writing classes. Language Learning & Technology, 12(2), 94-112.
Cheville, J. (2004). Automated scoring technologies and the rising influence of error. English Journal, 93(4), 47-52.
Chodorow, M., & Burstein, J. (2004). Beyond essay length: Evaluating e-rater's performance on TOEFL essays (TOEFL research report, No. RR-04-73). Princeton, NJ: Educational Testing Service.
James, C. (2007). Validating a computerized scoring system for assessing writing and placing students in composition courses. Assessing Writing, 11(3), 167-178.
Condon, W. (2013). Large-scale assessment, locally-developed measures, and automated scoring of essays: Fishing for red herrings? Assessing Writing, 18(1), 100-108.
Council of Writing Program Administrators, National Council of Teachers of English, & National Writing Project. (2010). Framework for success in postsecondary writing. http://wpacouncil.org/framework.
Deane, P. (2013). On the relationship between automated essay scoring and modern views of the writing construct. Assessing Writing, 18(1), 7-24.
Elliot, N., Deess, P., Rudniy, A., & Joshi, K. (2012). Placement of students into first-year writing courses. Research in the Teaching of English, 46(3), 285-313.
Elliott, S. (2011). Computer-graded essays full of flaws. Dayton Daily News (May 24). http://www.daytondailynews.com/project/content/project/tests/0524testautoscore.html.
Frank, L. A. (1992). Writing to be read: Young writers' ability to demonstrate audience awareness when evaluated by their readers. Research in the Teaching of English, 26(3), 277-298.
Herrington, A., & Moran, C. (2001). What happens when machines read our students' writing? College English, 63(4), 480-499.
Herrington, A., & Moran, C. (2006). WritePlacer Plus in place: An exploratory case study. In P. F. Ericsson & R. H. Haswell (Eds.), Machine scoring of student essays: Truth and consequences (pp. 114-129). Logan, UT: Utah State University Press.
Herrington, A., & Moran, C. (2012). Writing to a machine is not writing at all. In N. Elliot & L. Perelman (Eds.), Writing assessment in the 21st century: Essays in honor of Edward M. White (pp. 219-232). New York, NY: Hampton Press.
Jones, E. (2006). ACCUPLACER's essay-scoring technology: When reliability does not equal validity. In P. F. Ericsson & R. H. Haswell (Eds.), Machine scoring of student essays: Truth and consequences (pp. 93-113). Logan, UT: Utah State University Press.
Mattern, K. D., & Packman, S. (2009). Predictive validity of ACCUPLACER scores for course placement: A meta-analysis (Research Report 2009-2). New York: College Board.
Matzen, R. N., & Hoyt, J. E. (2004). Basic writing placement with holistically scored essays: Research evidence. Journal of Developmental Education, 28(1), 2-34.
McCurry, D. (2010). Can machine scoring deal with broad and open writing tests as well as human readers? Assessing Writing, 15(2), 118-129.
McGee, T. (2006). Taking a spin on Intelligent Essay Assessor. In P. F. Ericsson & R. H. Haswell (Eds.), Machine scoring of student essays: Truth and consequences (pp. 79-92). Logan, UT: Utah State University Press.
National Governors Association & Council of Chief State School Officers. (2010). Common Core State Standards. http://www.corestandards.org/the-standards.
Perelman, L. (2012a). Construct validity, length, score, and time in holistically graded writing assessments: The case against automated essay scoring (AES). In C. Bazerman, C. Dean, J. Early, K. Lunsford, S. Null, P. Rogers, & A. Stansell (Eds.), International advances in writing research: Cultures, places, measures (pp. 121-132). Fort Collins, CO: WAC Clearinghouse; Anderson, SC: Parlor Press.
Perelman, L. (2012b). Mass-market writing assessments as bullshit. In N. Elliot & L. Perelman (Eds.), Writing assessment in the 21st century: Essays in honor of Edward M. White (pp. 425-438). New York, NY: Hampton Press.
Powers, D. E., Burstein, J., Chodorow, M. S., Fowles, M. E., & Kukich, K. (2001). Stumping e-rater: Challenging the validity of automated essay scoring (GRE Report, No. 98-08bP). www.ets.org/Media/Research/pdf/RR-01-03-Powers.pdf.
Powers, D. E., Burstein, J., Chodorow, M. S., Fowles, M. E., & Kukich, K. (2002). Comparing the validity of automated and human scoring of essays. Journal of Educational Computing Research, 26(4), 407-425.
Quinlan, T., Higgins, D., & Wolff, S. (2009). Evaluating the construct-coverage of the e-rater scoring engine. Princeton, NJ: Educational Testing Service.
Rafoth, B. A. (1985). Audience adaptation in the essays of proficient and nonproficient freshman writers. Research in the Teaching of English, 19(3), 237-253.
Ramineni, C., & Williamson, D. M. (2013). Automated essay scoring: Psychometric guidelines and practices. Assessing Writing, 18(1), 25-39.
Rubin, D. L., & O'Looney, J. (1990). Facilitation of audience awareness: Revision processes of basic writers. In G. Kirsch & D. H. Roen (Eds.), A sense of audience in written communication (pp. 280-292). Newbury Park, CA: Sage.
Streeter, L., Psotka, J., Laham, D., & MacCuish, D. (2002). The credible grading machine: Automated essay scoring in the DoD. In Proceedings of the Interservice/Industry Training, Simulation and Education Conference, 2002. Orlando, FL: I/ITSEC.
Wang, J., & Brown, M. S. (2008). Automated essay scoring versus human scoring: A correlational study. Contemporary Issues in Technology and Teacher Education, 8(4). http://www.citejournal.org/vol8/iss4/languagearts/article1.cfm.
Wohlpart, J., Lindsey, C., & Rademacher, C. (2008). The reliability of computer software to score essays: Innovations in a humanities course. Computers and Composition, 25(2), 203-223.
Wollman-Bonilla, J. E. (2000). Teaching science writing to first graders: Genre learning and recontextualization. Research in the Teaching of English, 35(1), 35-65.