Second Call for Papers: 3rd International Workshop on Artificial Intelligence and Intelligent Assistance for Legal Professionals in the Digital Workplace (LegalAIIA 2023)

Co-located with the
19th International Conference on Artificial Intelligence and Law (ICAIL 2023)

LegalAIIA 2023 will be a hybrid workshop held on Monday, June 19th, in Braga, Portugal, in conjunction with the 19th International Conference on Artificial Intelligence and Law (ICAIL 2023). As a hybrid workshop, LegalAIIA will be conducted both in-person and virtually (online).
https://sites.google.com/view/legalaiia2023/

This workshop will provide a platform for examining questions surrounding Artificial Intelligence (AI) and Intelligent Assistance (IA), also known as Augmented Intelligence, for legal tasks, including those related to legal practitioners’ interaction with digital information. The focus of the workshop will be on better understanding the interaction between AI/IA and human capabilities.

Over the past two decades, the growing use of machine learning (ML) and other artificial intelligence technologies has significantly increased legal professionals’ abilities to efficiently access, process, and analyze digital information. AI breakthroughs continue to improve everything from advanced search to information extraction and visualization, and from data summarization, classification, and review to the automation of legal-services tasks. At the same time, concerns over transparency and agency, as well as the potential limitations and risks of fully automated approaches to problems in the legal space, have led to an upsurge in interest in methods that incorporate human intelligence: the human-in-the-loop approach to AI. The debate over using AI as a replacement for humans, as opposed to an augmentation of human abilities (otherwise known as Intelligent Assistance or “IA”), has never been greater, especially given the current fervor over the release of applications based on Deep Learning (DL) and Large Language Models (LLMs), such as ChatGPT (via GPT-3.5/4.0) and Bard. For these next-generation applications, one question to be asked is: under what conditions will legal practitioners rely on these powerful underlying models in the future as AI-enabled independent agents, or, by contrast, as intelligent assistants that bolster their own professional skills rather than replace them? Will they be used to write an attorney’s legal brief, or will they simply supply an outline or a suggested first draft? Clearly, additional research into the nature, degree, and efficiency of the AI or IA contributions to various use cases is needed to ensure that these efforts and resources are deployed effectively and appropriately.

Open questions remain about the conditions in which human interaction and oversight are necessary to produce more effective results, for example, whether the human or the AI should be the primary driver in the collaboration, and whether or how increased interpretability and explainability of AI models are necessary for acceptable and successful human-AI collaboration in the legal domain. Proposals on how best to evaluate various methods of human augmentation are welcome, as are analyses of the ethical implications of adopting AI as replacement versus AI as augmentation in legal applications. Human augmentation may focus on a number of different audiences and settings, such as legal practitioners, consumers of legal services, or business concerns around legal topics.

Participation is invited through the submission of original works on topics relevant to these research themes, including but not limited to:

Overarching Themes

  • Studies of ML systems and their impact on human effectiveness and reliability, including DL and LLM-based systems
  • Performance evaluation (AI automation vs. human-in-the-loop IA approaches)
  • User-based experiments (with varying roles and degrees of human involvement)
  • AI/IA application risks, including those introduced by the absence of a human-in-the-loop

Human-Computer Interaction Studies in the Context of AI or IA Systems

  • The impact of autonomous ML-trained systems vs. ML-trained systems with human oversight
  • Novel interaction techniques for legal technology systems
  • User studies relevant to legal professionals and tasks
  • Empirical studies/experiments on human-AI collaboration
  • Collaborative information seeking and social search, including social utility and network analysis for information interaction
  • User-centered evaluation methods and measures, including measures of user experience and performance
  • Experimental work on task design, legal data analysis methods, and usability

Explainability, Fairness, Ethics

  • Examination of the role of transparency, explainability or interpretability in AI-supported systems
  • Topics involving bias, discrimination, fairness, or justice in AI systems
  • Accountability in AI-supported systems, including organizational, design & development, and deployment perspectives
  • Designing ethics or compliance with law into computational systems
  • Algorithm appreciation (bias in favor of algorithms over humans) or algorithm aversion (bias in favor of humans over algorithms)

Specific Applications

  • Litigation tasks, including analysis and/or drafting of litigation-related documents
  • Technology-aided discovery or electronic data discovery (EDD)
  • Legal services tasks, including document review, generation, and response
  • Systems that provide specific guidance to consumers and other end users who are not legal experts, including expert systems, user support systems, and chatbots
  • ChatGPT (GPT-3.5/4.0) and Bard-based applications and related deployments
  • Contracting tasks, including analysis, review, drafting, and negotiation
  • Internal compliance monitoring
  • Protection of sensitive data
  • Data breach response
  • Investigations of criminal behavior
  • Forensic science
  • Improved handling and interpretation of data from collaboration platforms
  • Generation of synthetic test data for legal and e-discovery applications
  • Any additional use of AI and IA tools for legal services and related legal tasks

General Statement on Submissions
Submissions are not limited to the specific examples above. Any research involving the application of AI or ML to the legal domain, as well as complementary technologies such as NLP, IR, DM, or NER, is welcome as a workshop submission. Material that addresses some of the themes and issues presented above should be integral to the discussion section of such works.

Workshop Structure
This full-day workshop will consist of four distinct components:

  1. Presentation of approximately eight peer-reviewed papers (reviews shared by members of the PC).
  2. Two keynote talks (one in the morning and one in the afternoon), one from academia and one from industry.
  3. Panel discussion among four experts in the field plus one moderator, on a central topic such as the AI vs. IA prospects for ChatGPT.
  4. Break-out session consisting of smaller groups, each one focusing on a distinct sub-topic or debated issue in this space, with time for each small group to “report back” to the larger group in a final workshop panel.

Keynote Speakers
We are delighted to announce our two invited talks for the 3rd LegalAIIA Workshop.

Dr. Frank Schilder is a Senior Research Director at Thomson Reuters with TR Labs, leading a team of researchers exploring new machine learning and artificial intelligence techniques in order to create smart products for legal NLP problems. His research interests include summarization, question answering and information extraction, and natural language generation. Frank received his master’s degree in computer science (Diplom-Informatik) from the University of Hamburg and his Ph.D. in cognitive science from the University of Edinburgh, Scotland. Before joining Thomson Reuters, he was an Assistant Professor in the Department of Informatics, University of Hamburg, Germany.

Title: Legal Expertise Meets Artificial Intelligence: A Critical Analysis of Large Language Models as Intelligent Assistance Technology

Abstract:
This talk investigates an intelligent assistance (IA) approach to utilizing Large Language Models (LLMs) in the legal domain by addressing the risks associated with unchecked artificial intelligence (AI) applications. We emphasize the importance of understanding the distinctions between AI and IA, with the latter involving human-in-the-loop decision-making processes, which can help mitigate risks and ensure responsible use of this rapidly developing technology.

Using ChatGPT and GPT-4 as a prime example, we demonstrate its dual role as both an AI and an IA application, showcasing its versatility in a variety of legal tasks. We look at recently reported explorations, in particular in using LLMs to address tasks such as multiple-choice question answering, legal reasoning, case outcome prediction, and summarization. We argue that to fully achieve "augmented intelligence," a reasoning and knowledge base component is required, allowing IA systems to effectively support human users in decision-making processes.

Dr. Kristian J. Hammond, from the McCormick School of Engineering at Northwestern University, will be our first keynote speaker. His talk will concentrate on the AI theme of our workshop. Dr. Hammond is the Bill and Cathy Osborn Professor of Computer Science at Northwestern University and the co-founder of the Artificial Intelligence company Narrative Science. His primary research is in the field of data analytics and human/machine communication. Dr. Hammond also conducts research at the intersection of law and technology, including developing platforms for analyzing legal data.
https://www.mccormick.northwestern.edu/research-faculty/directory/profil...

Title & Abstract (to be posted soon on the workshop website)
Focus: An alternative perspective on LLMs/ChatGPT

AI & Ethics at LegalAIIA
All submissions to LegalAIIA are strongly encouraged to include a section that addresses the ethical implications of the work being undertaken, as well as ethical considerations related to the subject matter of the paper. Full papers focused on this topic within the specific context of the conference would also be appropriate submissions to the workshop.

Background
This workshop is an outgrowth of the popular and successful decade-long DESI (Discovery for Electronically Stored Information) series of workshops. (See http://users.umiacs.umd.edu/~oard/desi7/#History)

Important Dates:

  • Submission Deadline: Friday, May 5th, 2023
  • Notification of Acceptance: Friday, May 26th, 2023
  • Camera-Ready Versions Due: Friday, June 16th, 2023
  • Workshop date: Monday, June 19th, 2023

Workshop Organizers:

  • Jack G. Conrad, Thomson Reuters Labs (Co-chair)
  • Daniel W. Linna, Northwestern University (Co-chair)
  • Jason R. Baron, University of Maryland, College Park
  • Amanda Jones, Lighthouse Law
  • Jeremy Pickens, Redgrave Data
  • Aileen Nielsen, ETH Zurich and Harvard Law School
  • Paheli Bhattacharya, Indian Institute of Technology Kharagpur
  • Hans Henseler, Univ. of Applied Sciences Leiden and Netherlands Forensic Institute
  • Jyothi Vinjumur, CBRE Legal

Program Committee:

  • Apoorv Agarwal, Relativity (formerly Text IQ)
  • Evangelos Kanoulas, Informatics Institute, University of Amsterdam
  • David Lewis, Redgrave Data
  • Douglas W. Oard, University of Maryland, College Park
  • Fabrizio Sebastiani, Italian National Research Council (ISTI-CNR)
  • Antigone Peyton, Ridgeline International
  • Amy Sellars, CBRE Group
  • Gineke Wiggers, Leiden University
  • Adam Roegiest, Zuva
  • Eugene Yang, Johns Hopkins (postdoc)

Paper types:
In view of the novelty of this workshop's focus, three primary types of papers are sought: Research, Application, and Ideation. Research papers should include some form of empirical evaluation appropriate to the nature of the paper. Application papers should actively demonstrate one or more of the themes above through comparative studies, evaluation, or other types of examination. Ideation papers are similar to position papers in that experimental results are not required. However, while Ideation papers permit discussion of ideas that have not been tested, the ideas should be testable. An Ideation paper should contain concrete proposals on how the ideas would be evaluated and implemented, and should discuss the significance or impact of doing so.

Submission guidelines:
Papers should be no longer than 12 pages. Authors should submit their papers using the same two-column paper format used for CEUR-WS proceedings. Papers must be formatted as per the CEUR proceedings guidelines: https://ceur-ws.org/HOWTOSUBMIT.html. LaTeX style files and MS Word templates (docx) can be found in this zip file: http://ceur-ws.org/Vol-XXX/CEURART.zip. The two-column versions should be used.
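
For orientation, a minimal two-column skeleton is sketched below. It is illustrative only and modeled on the publicly available CEURART sample files; the exact macro names (e.g., the author/address commands and the keywords environment) are assumptions that may differ slightly from the version in the zip archive above, so authors should start from the official template files.

    % Minimal sketch of a two-column CEUR-WS submission (illustrative only;
    % macro names assumed from the standard CEURART sample files).
    \documentclass[twocolumn]{ceurart}

    \begin{document}

    \title{Your LegalAIIA 2023 Paper Title}

    \author[1]{First Author}
    \address[1]{Affiliation, City, Country}

    \begin{abstract}
      A one-paragraph abstract of the submission.
    \end{abstract}

    \begin{keywords}
      legal AI \sep intelligent assistance \sep human-in-the-loop
    \end{keywords}

    \maketitle

    \section{Introduction}
    Body of the paper (no longer than 12 pages in total).

    \end{document}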

While papers can be prepared using LaTeX or Word, all papers should be converted to PDF prior to submission. Reviews will not be blind, so papers should contain author information.

Papers should be submitted to the following email address: organizers [at] legalAIIA [dot] org, with the subject line “RESEARCH” or “APPLICATION” or “IDEATION”, based on the submitted paper type.

Questions may be directed to the co-organizers via the email addresses included below. At least one author of an accepted paper is expected to attend the workshop either in-person or virtually.

Proceedings:
All accepted workshop papers will be included in the LegalAIIA 2023 proceedings and published by CEUR-WS as part of the CEUR Workshop Proceedings series.

Publication opportunities:
Papers accepted for presentation will be available online prior to the workshop. Papers will also be published on the CEUR workshop website and indexed by the DBLP Computer Science database.

Examples from LegalAIIA 2019:
CEUR-WS Workshop website: http://ceur-ws.org/Vol-2484/
Indexed DBLP database: https://dblp.org/db/conf/icail/legalaiia2019.html#RietveldRK19

Selected papers may be proposed for publication in a special issue of the Artificial Intelligence and Law journal or as chapters in a book on AI and Intelligent Assistance for Legal Professionals in the Digital Workplace.

Workshop Website:
https://sites.google.com/view/legalaiia2023/

Registration
https://icail2023.di.uminho.pt (under Registration)

Contacts:
Jack G. Conrad, Thomson Reuters: jackgconrad [at] gmail [dot] com
Daniel W. Linna, Northwestern University: daniel.linna [at] law [dot] northwestern [dot] edu
