As AI tools such as ChatGPT continue to evolve and become more commonplace, many trade and professional associations are turning to AI to enhance their operations and decision-making and to benefit their members. As with any emerging technology, however, the use of AI by associations raises a number of important legal issues that must be carefully considered and addressed.
Data Privacy
One of the primary legal issues associated with the use of AI by associations is data privacy. AI systems rely on vast amounts of data to train and improve their algorithms, and associations must ensure that the data they collect is used in accordance with applicable state, federal and international privacy laws and regulations.
Associations must be transparent with their members about how their data will be collected, used and protected, and must obtain the necessary member consents to use and share sensitive data. Remember that data (such as confidential membership information) entered into an AI system such as ChatGPT will, in most circumstances, no longer remain confidential and protected, and will be subject to the AI system's then-current terms of use or service. As such, associations should not allow their staff, volunteer leaders or other agents to enter into an AI system any personal data, trade secrets, or data that is confidential, privileged or otherwise may not be disclosed to third parties.
Intellectual Property
Intellectual property is a key legal issue associations must consider when using AI. AI systems can generate new works of authorship, such as software programs, artistic works, articles and white papers. Associations must ensure that they have the necessary rights and licenses to use and distribute these works. They also must be transparent about who or what created such works.
Take steps to ensure that AI-generated content is not, for instance, registered with the U.S. Copyright Office as the association’s own unless it has been sufficiently modified to become a product of human creation and an original work of authorship of the association. Associations also must be mindful of any third-party intellectual property rights that may be implicated by their use of AI, such as copyrights or patents owned by AI vendors, developers or others, and ensure they do not infringe on any third-party copyright, patent or trademark rights.
Do not permit confidential or otherwise protected content (such as trade secrets or information subject to a nondisclosure obligation or the attorney-client privilege) to be entered into an AI system, as such content will no longer be protected and confidential.
Discrimination
Another legal issue to consider is discrimination. AI systems can inadvertently perpetuate bias and discrimination, particularly if they are trained on data that reflects historic biases or inequalities. Associations must ensure that their AI systems do not discriminate based on race, ethnicity, national origin, gender, age, disability or other legally protected characteristics, and must take steps to identify and address any biases that may be present in their algorithms.
For instance, large employers' use of AI systems to screen applicant resumes and even analyze recorded job interviews is growing rapidly. If an AI system penalizes candidates because it cannot understand a person's accent or speech impediment, that could potentially lead to illegal employment discrimination.
While this will only become a legal issue in certain contexts, such as the workplace, the use of AI has the potential to create discriminatory effects in other association settings, such as membership and volunteer leadership, and needs to be carefully addressed.
Tort Liability
Associations must consider the potential tort liability issues that may arise from their use of AI. If an AI system produces inaccurate, negligent or biased results that harm members or other end users, the association could potentially be held liable for any resulting damages. Associations must therefore ensure that their AI systems are reliable and accurate, and that all resulting work product (such as industry or professional standards set by an association) is carefully vetted for accuracy, veracity, completeness and efficacy.
Insurance
Associations need to ensure that they have appropriate insurance coverage in place to protect against potential liability claims in all these areas of legal risk. Note that traditional nonprofit directors and officers (D&O) liability, commercial general liability and cyber insurance policies may be — and likely are — insufficient to fully protect associations in all of these areas. Associations also should explore acquiring an errors and omissions liability and media liability insurance policy to fill those coverage gaps.
Conclusion
While the use of AI presents numerous opportunities and benefits, there are a number of legal issues that need to be carefully considered before associations go too far down the AI path. Among other things, associations must ensure they are transparent with members about the use of their data, obtain necessary intellectual property rights and licenses and avoid infringing others’ rights, address any potential biases in their algorithms, protect themselves against potential tort liability claims and secure appropriate insurance coverage to protect against these risks.
This article was republished with permission from tenenbaumlegal.com.